
4 Outsourcing Mistakes Companies Still Make

Control freaks, blame games, and other misguided attempts at building a better business through IT outsourcing.

There's still no script for the Great American IT outsourcing project. But today's most common outsourcing pitfalls have less to do with technology than with relationships and communication -- or the lack thereof.

"Both companies have to rise to the occasion to make it work," says Romi Mahajan, president of the KKM Group, a marketing consulting firm that outsources some of its IT operations.

Nevertheless, communication breakdowns and finger pointing frequently derail even the best-laid outsourcing plans. Here are four missteps to avoid.

1. You play the blame game
Whether onshoring in Kansas City or offshoring in India, outsourcing is a relationship. If communication is poor or scarce and blame is passed around, you won't form a lasting business relationship.

Mahajan, who will be a panelist at an Interop New York session (Sept. 29 – Oct. 3) called "Outsourcing, Virtualization and Cloud Computing - A Sign of the Broken Relationship Between IT and The Business?," sees outsourcing from both sides of the fence: he's an advisor to India-based outsourcer Advaiya Solutions as well as president of the KKM Group.

Mahajan notes that complex projects, such as a CRM implementation, can run into big trouble over a minor technical glitch or a miscommunication between people.

"The customer blames the offshore company for overpromising, and a routine issue then becomes a mountain of a problem," he says. "I see this kind of 'us versus them' thinking a lot. It delays projects for months."

2. You focus on pay rates over results
Companies outsourcing IT resources often fixate on getting the lowest hourly rate without seeing the big picture, says Forrester principal analyst Liz Herbert. They should always ask: What is the overall pyramid -- are we getting low rates but a huge team?

Outsourcer A, for example, could charge $100 per hour but staff the project with a large team of junior offshore workers with limited experience. Outsourcer B could charge $200 per hour but use one-third as many workers as Outsourcer A, all of them more experienced.
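To make the pyramid arithmetic concrete -- all figures here are hypothetical -- the total bill depends on rate times headcount, not rate alone:

```python
def project_cost(hourly_rate, team_size, hours_per_person):
    """Total monthly labor cost for one staffing model (illustrative only)."""
    return hourly_rate * team_size * hours_per_person

# Outsourcer A: low rate, large junior team (hypothetical numbers)
cost_a = project_cost(100, 30, 160)
# Outsourcer B: double the rate, one-third the headcount
cost_b = project_cost(200, 10, 160)
print(cost_a, cost_b)  # 480000 320000
```

In this toy scenario, the "cheaper" $100 rate produces a bill 50% higher than the $200 rate -- before accounting for any rework by less experienced staff.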

3. You're a control freak
Companies should be careful not to micromanage. The whole point of choosing an outsourcer is for IT skills and expertise, but too many companies squash efficiencies by over-dictating how a project should run, says Herbert.

For instance, a company contracts with an outsourcer for SAP support on systems used to run a manufacturing plant. The real goal of the project is to improve plant output. However, if the company micromanages the specific technologies and SLAs, it'll lose out on the outsourcer bringing in its own ideas to help reach the real business goal: plant production.

Outsourcing pricing models commonly use the concept of full-time equivalents (FTEs), which measures how many full-time workers are needed to perform tasks. But Herbert says Forrester research shows that if a company moves from an FTE-based approach to one where the outsourcer manages the resources according to what it thinks is best throughout the project, the client can save 20% to 30% in most cases.

"The client rarely has the right expertise to know exactly what mix and how many FTEs they need -- and, of course, this changes over time."

And having SLAs that are not aligned with business goals can lead to troubling scenarios where the outsourcer is technically achieving all the SLAs but the client is not getting business results. "It's a case of 'all the SLAs are green' but our business goal is red," says Herbert.
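Herbert's "green SLAs, red business goal" scenario can be sketched in a few lines; the SLA names and numbers below are hypothetical, but they show how a contract can pass every technical check while the business metric stalls:

```python
def slas_green(slas):
    """True if every (actual, target) SLA pair meets its target."""
    return all(actual >= target for actual, target in slas.values())

# (actual, target) pairs -- hypothetical contract metrics
slas = {
    "uptime_pct":        (99.95, 99.9),
    "ticket_close_rate": (0.97, 0.95),
}
plant_output_improved = False  # the business result the SLAs never measured

print(slas_green(slas), plant_output_improved)  # True False
```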

4. You think you can outsource your whole brain
There's a danger at the other extreme, too. Mahajan says too many companies think their job is done once they've signed an outsourcing contract -- they assume the outsourcer is a mind reader and give it too much leeway.

"It's almost impossible to convey 100% the desired outcome of an IT outsourcing project," says Mahajan. "Only the company knows exactly what it wants to achieve. Don't think the outsourcer will do all the thinking for you."

Business goals can be moving targets, and if companies don't keep explaining those goals throughout the process, the relationship will sour fast.

The mistake of "outsourcing your brain" can signal poor planning as well as leadership problems within the company, says Michele Chubirka, senior security architect at Packet Pushers, a popular podcast for networking pros.

"If you're not dealing with your own conflicts between IT and the business, outsourcing will not fix them," Chubirka says. "An outsourcer should always provide expertise that you don't have in-house -- period."

(Chubirka will also be on the Interop New York panel with Mahajan.)

Indian outsourcer Advaiya (where Mahajan serves as an advisor) dealt with such a conundrum when it developed an entire technical framework, worth several hundred thousand dollars, for a well-known tech vendor. The vendor decided it didn't want to use the framework but never communicated that to Advaiya.

"We did all this work and the vendor didn't want to pay at the end, but they didn't update our team along the way," Mahajan says. "We were working in a vacuum."


We put a ton of trust in technology every day, but are you confident enough in two-factor authentication to give out one of your passwords? Christopher Mims of The Wall Street Journal is. In a post on the site proclaiming that passwords are "finally dying," Mims extolls the virtues of the secure login method immediately after giving out his Twitter password. He says he's confident he won't be hacked because, among other reasons, the second authentication step (a text message containing a numerical code sent to the user's cellphone, or an app that generates a code should you be outside cellular data range) is difficult to intrude upon. As Forbes spotted, though, Mims' Twitter account has since been slammed with login attempts, and his phone blew up with authentication codes as a result, forcing him to associate a different phone number with the microblogging service.

The lesson here? If you're willing to put your online identity up for grabs, prepare for the consequences. It could've been a lot worse for Mims, though -- it's not like he gave out his Social Security number or anything.
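For context, the app-generated codes mentioned above are typically time-based one-time passwords (TOTP, RFC 6238): an HMAC-SHA1 over the current 30-second time step, truncated to six or eight digits. A minimal standard-library sketch:

```python
import hmac, hashlib, struct

def totp(secret: bytes, unix_time: int, digits: int = 6, period: int = 30) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = struct.pack(">Q", unix_time // period)  # 64-bit time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", 59 seconds
print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
```

Because each code is valid only for one short window, knowing the password alone -- as Mims' hecklers discovered -- isn't enough to get in.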


What will smartphones do next? Check out these technologies and ideas for next-gen gadgets.

First up, why do we still call that familiar little friend in our pocket a "smartphone"? A phone? Sure, we still make or take calls, but it's high time the name of the device changed to reflect the use we're making of it today -- and the multitude of uses it will have tomorrow.

No, I'm not thinking "pocket computer," which sounds like something Flash Gordon might have carried in his utility belt. How about "life key"? Or just "my control." Because it takes only a moment's reflection to realize where that thin client in constant use is headed. It's part of a world-changing digital fabric, and woven together with machine-to-machine data, the Internet of Things, and the cloud (obviously), it's going to be the central dashboard for our working lives and our leisure lives -- assuming there continues to be a distinction.

We're nearing a major break in the smartphone's history. Whether that break will be sudden and clean, or gradual and sticky, is yet to be seen. It all started with the slow evolution of telephones: from separate ear and mouthpieces to single handsets; from rotary to push-buttons; and then from the table in the hallway to wireless portability. At some point, however -- whether you credit BlackBerry, Nokia, Palm, or Apple -- smartphone development irrevocably merged with the evolution of computing and the Internet.

In this slideshow, we take a look at six cool technologies coming to your pocket in the near to medium future. We're just scraping the surface, of course, but we think these technologies are at least representative of the direction so-called smartphones are going to take.

Specifically:

- Smartphones will find ways to be off the grid, both as far as power is concerned and in terms of being available for use always and anywhere.
- They're going to be part of an ecosystem of connected devices -- in fact, they're probably going to be the system hub.
- What's more, they're going to change the way we experience the actual world, transforming our surroundings into an ever-connected, informational environment.
- There'll be much greater flexibility in modes of operation. The days of finger-pecking will be over.
- And the physical properties of smartphones will start to adapt to functional needs.

Smartphones? Why do they even need to resemble handsets? Forget smartphones -- we're looking ahead to wearable (certainly), implantable (maybe), and even invisible (why not?) devices... but probably not shoephones.


Electronic medical records help healthcare organizations improve patient care, but lack of standardization could cause safety and security problems.

The foundation hospitals built when they overwhelmingly adopted electronic medical records is trembling under the weight of concerns over security and lack of standardization.

Healthcare organizations already see plenty of benefits from EMRs. The Internet is full of success stories detailing how hospitals save and improve lives, reduce costs, and enhance research capabilities through new access to real-time data. Many EMR applications are high-quality tools that take users' needs and wishes into account and evolve to meet mandates and clinicians' changing requirements.

Yet healthcare sometimes seems to operate in a vacuum. It appears determined to repeat the steps already taken by industries such as finance instead of skipping the proprietary, isolationist years and leaping straight into the era of standards, collaboration, and data-sharing. The government is starting to shake an interoperability stick, but the industry should act on its own initiative to allow disparate systems to work together -- and not only to cut costs for healthcare provider implementations. Standardization will also improve patient safety, care, and results, experts say, reducing the cost of care and strengthening data security. Standards would accomplish this by giving healthcare employees clear guidelines and by restricting access by unauthorized users.

At least one report suggests these predictions are on track. Concerned that increased use of EMRs tallied with an uptick in "patient safety events," the Division of Laboratory Programs, Standards and Services in the Center for Surveillance, Epidemiology and Laboratory Services, within the Centers for Disease Control and Prevention (CDC), studied errors in labs based on electronic health record (EHR) data. In some cases, labs used outdated software that didn't support current coding -- an issue that might increase when ICD-10 finally arrives.

Different facilities also use dissimilar codes for the same tests, creating confusion -- especially among staff members who move among different hospitals and clinics, according to a CDC report. In one case, the report cited, a woman required a hysterectomy after an EMR moved her abnormal test results to the bottom of the screen instead of placing the most recent results at the top. In another, a male patient received a double dose of a blood thinner due to an EMR error.
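At its core, the dissimilar-codes problem is a vocabulary-mapping problem: each facility's local test code must resolve to one shared identifier (standards such as LOINC fill this role in practice). A toy sketch -- the local codes and map entries here are invented for illustration:

```python
# Hypothetical local-code -> shared-code maps for two facilities
FACILITY_MAPS = {
    "hospital_a": {"K": "LOINC:2823-3"},    # serum potassium
    "clinic_b":   {"POT": "LOINC:2823-3"},  # same test, different local code
}

def canonical_code(facility: str, local_code: str) -> str:
    """Resolve a facility-specific test code to the shared vocabulary."""
    try:
        return FACILITY_MAPS[facility][local_code]
    except KeyError:
        raise ValueError(f"unmapped code {local_code!r} at {facility}")

# Both facilities now agree they ran the same test
print(canonical_code("hospital_a", "K") == canonical_code("clinic_b", "POT"))  # True
```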

Other areas of concern: inadequate data transfer from one EHR to another, data entry in the wrong patient record, incorrect data entry, failure of the system to function correctly, and incorrect configuration, patient safety organization ECRI Institute wrote in a separate report.

"Recognizing that such errors can occur without health IT systems, there is cause for concern as an occasional error in a health IT can be replicated very quickly across a large number of patients," the CDC's report said. "Combining documented patient safety events with the anecdotal evidence shared by individual laboratory professionals across the US presents enough concern to warrant further investigation and mitigation."

The lack of EMR standards creates a greater security burden on healthcare organizations and professionals. But the stakes are incredibly high, not only because of the number of patients who could be impacted by a single breach, but also because of the sensitive nature of the data stored in EMRs and the potential for damage to an organization's reputation.

"We're in an historic time within healthcare. The impact from a healthcare perspective has the same impact as, say, a retail breach, but you're talking about personal health information, things that should be very private," said Ken Bradberry, CTO and vice president at Xerox Healthcare Provider Solutions, in an interview. "We're talking about strategies in healthcare that haven't evolved at the rate they should have. Security has to evolve and align with where we're at with the delivery of electronic health records and the delivery of services in general. The detection and [prevention] of security breaches [and] threats has to be of paramount importance to healthcare providers."

Now that more than 93% of hospitals use at least one EMR, government agencies, researchers, and pundits point to worrisome trends that could -- left unfixed -- jeopardize patients' faith in providers, payers, and the overall system. The drive among providers to forge partnerships and integrate EMRs between smaller practices, hospitals, accountable care organizations (ACOs), health information exchanges (HIEs), and other members of the healthcare ecosystem creates additional links in the chain -- and more potential points of breach, loss, or theft.

"The government is pushing for EHRs, but no one is overseeing the security and privacy of the records," said Karl Volkman, chief technology officer at Microsoft Gold Certified partner SRV Network. "Instead, it's left up to the individual organizations, which may allow medical personnel to alter records incorrectly with little oversight -- or the entire system may not have the capacity to protect from fraudulent encounters. Instead of rewarding and punishing those who have or have not switched to EHRs, the government should consider instilling standards to identify inappropriate use of the records, fraud, and breaches."


Another month of security updates from Microsoft means, once again, another round of fixes for the company's Internet Explorer (IE) Web browser, as well as a set of updates for the Windows operating system, for both the server and desktop editions.

Overall, Microsoft has issued six bulletins in July's "Patch Tuesday" collection of software fixes. Microsoft issues these collections on the second Tuesday of each month, hence the name "Patch Tuesday."

Two of the patches are marked as critical, meaning they address defects in Microsoft's software that could be readily exploited by malicious attackers to compromise systems. One of the critical bulletins is for IE, and the other one is for Windows.

Three of the remaining bulletins are denoted as "important" by Microsoft and one as "moderate." These bulletins cover Windows and the messaging component of Windows Server.

A single bulletin may cover multiple patches for a single piece of software, such as Microsoft Windows.
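That structure is easy to model: one bulletin covers one product at one severity and may bundle several individual patches. A sketch of July's mix -- the two critical IDs come from the article; the remaining IDs are placeholders:

```python
from collections import Counter

bulletins = [
    {"id": "MS14-037", "product": "Internet Explorer", "severity": "critical"},
    {"id": "MS14-038", "product": "Windows",           "severity": "critical"},
    {"id": "MS14-0XX", "product": "Windows",           "severity": "important"},
    {"id": "MS14-0XX", "product": "Windows",           "severity": "important"},
    {"id": "MS14-0XX", "product": "Windows",           "severity": "important"},
    {"id": "MS14-0XX", "product": "Windows Server",    "severity": "moderate"},
]

# Triage: count by severity, then patch criticals first
severity_counts = Counter(b["severity"] for b in bulletins)
criticals = [b["id"] for b in bulletins if b["severity"] == "critical"]
print(severity_counts, criticals)
```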

Wolfgang Kandek, chief technology officer for security firm Qualys, advised administrators to look at the IE patches first. IE update MS14-037 addresses one publicly disclosed vulnerability and 23 privately reported vulnerabilities. The critical patches in this set all address vulnerabilities that could lead to remote code execution, which would allow an attacker to gain privileges on a machine by tricking a user into viewing a specially crafted Web page in the browser.

The critical Windows update MS14-038 covers a remote code execution vulnerability stemming from a flaw in how Windows opens files in the Windows Journal format. Windows Journal is Microsoft's software for capturing handwritten notes on a computer; the format can be read not only on touch-enabled devices but also on non-touch Windows computers.

If an organization does not use the Journal format, it may be a good idea to turn off the capability altogether in its Windows machines, so as to reduce the "attack surface" of these computers, Kandek said. In general, it is a good idea to turn off any unneeded services in computers if an administrator has the time to do this, he said.

While administrators are in the mode of testing and applying software patches, they should also take a close look at the critical patches Adobe has issued Tuesday for its Flash player.

Oracle shops should also prepare for Oracle's quarterly round of patches, due to be issued Thursday.

IE tends to get the most fixes on Patch Tuesday not necessarily because it is inherently more buggy than other Microsoft software, but because it is widely used software that could provide an entry point for outsiders to break into the computers that run it. As a result, it is under intense scrutiny by both malicious attackers and security researchers.

IE is not necessarily any more buggy than other popular browsers, such as Google Chrome or Mozilla's Firefox. Both Google and Mozilla have automatic updates for their browsers, so a vulnerability can get addressed as soon as the developers create a patch to fix the problem, noted Amol Sarwate, the director of Qualys' Vulnerability Labs. As a result, such bugs and their attendant fixes are rarely called out in the press, unless they are critical in nature.


The U.S. Senate Intelligence Committee approved Tuesday a cybersecurity bill that would pave the way for sharing of information between government and the private sector on security threats.

Senate Intelligence Committee Chairwoman Dianne Feinstein, a Democrat from California, and Vice Chairman Saxby Chambliss, a Republican from Georgia, said that the committee had approved the bill in a 12-3 vote.

The Cybersecurity Information Sharing Act has been criticized by civil liberties and privacy groups because of the potential privacy implications of the sharing of data by companies with the government. Information including communications content shared with the government could potentially be used in various law enforcement investigations, including the investigation and prosecution of government whistle-blowers, the groups wrote in a letter in June to the Senate Committee.

Senators Ron Wyden and Mark Udall, both Democrats who voted against the bill, said Tuesday that there was a need for sharing of information by the government and companies on cybersecurity threats, but demanded that there should first be strong protections for Americans' constitutional privacy rights.

"... we have seen how the federal government has exploited loopholes to collect Americans' private information in the name of security," the senators said in a statement, in an apparent reference to disclosures by former National Security Agency contractor Edward Snowden about the agency's bulk surveillance of people in the U.S. and abroad.

The bill seems to disregard the revelations about NSA surveillance and includes no new civil liberties protections, wrote Greg Nojeim, senior counsel at the Center for Democracy & Technology in a blog post ahead of the committee decision. "As with most Intelligence Committee mark ups, this one will be held secretly, thus depriving the public of much information about the matters the Committee considered," he added.

The bill requires the director of national intelligence to increase the sharing of classified and unclassified cyberthreat information with the private sector. It authorizes companies and individuals to voluntarily share cyberthreat information with one another and with the government -- for cybersecurity purposes only, and only after taking measures to prevent the sharing of personally identifying information, according to a statement Tuesday by Feinstein and Chambliss, who also authored the bill.

It also provides liability protections for individuals and companies that appropriately monitor their networks or share cyber information.

"To strengthen our networks, the government and private sector need to share information about attacks they are facing and how best to defend against them," Feinstein said in the statement. "This bill provides for that sharing through a purely voluntary process and with significant measures to protect private information." One of the amendments to the bill adopted Tuesday further strengthens privacy protections in the bill, the senators said, without providing details.

A similar bill, called the Cyber Intelligence Sharing and Protection Act, was passed by the U.S. House of Representatives but did not make it through the Senate after the White House stressed the importance of having privacy protections built into the legislation.

Mike Rogers, chairman of the Permanent Select Committee on Intelligence of the U.S. House of Representatives, and ranking member C.A. Dutch Ruppersberger on Tuesday welcomed the decision of the Senate Intelligence Committee and urged the full Senate to move quickly to pass "this important legislation." The House has its own bill on cybersecurity and the two representatives hoped the House and Senate would come together to "craft a final bill that secures our networks and protects privacy and civil liberties."


Implementation issues with AVG Secure Search, a browser toolbar from antivirus vendor AVG Technologies that's supposed to protect users from malicious websites, could have allowed remote attackers to execute malicious code on computers.

The toolbar, also known as AVG SafeGuard, supports Google Chrome, Internet Explorer and Mozilla Firefox running on Windows XP and later, and is often bundled as an optional installation with popular free software programs.

According to researchers from the CERT Coordination Center (CERT/CC) at Carnegie Mellon University, versions 18.1.6 and older of AVG Secure Search and AVG SafeGuard install an ActiveX control called ScriptHelperApi in Internet Explorer that exposes sensitive functionality to websites.

"This control does not internally enforce any restrictions on which sites may invoke its methods, such as by using the SiteLock template," said Will Dormann, a vulnerability analyst at CERT/CC, in a security advisory published Monday. "This means that any website can invoke the methods exposed by the ScriptHelper ActiveX control."

Furthermore, upon installation, ScriptHelper is automatically placed on a list of pre-approved ActiveX controls in the system registry, bypassing a security feature first introduced in Internet Explorer 7 that prompts users for confirmation before executing ActiveX controls. It's also excluded from IE's Protected Mode, a security sandbox mechanism, Dormann said.

All these conditions make it possible for an attacker to execute malicious code on the computer of a user who has a vulnerable version of AVG Secure Search installed, if the user opens a specifically crafted HTML Web page, email message or attachment in Internet Explorer. The rogue code would be executed with the privileges of the logged-in user, Dormann said.
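The safeguard Dormann says is missing -- SiteLock restricts which domains may script a control -- boils down to an origin allowlist check before any method call. A conceptual Python sketch (not the actual COM mechanism, and the allowlisted hosts are hypothetical):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"avg.com", "www.avg.com"}  # hypothetical vendor allowlist

def may_invoke(page_url: str) -> bool:
    """SiteLock-style check: only allowlisted origins may call the control."""
    host = urlparse(page_url).hostname or ""
    return host in ALLOWED_HOSTS

print(may_invoke("https://www.avg.com/search"))  # True
print(may_invoke("http://attacker.example/x"))   # False
```

Without a check like this, as the advisory notes, any page the browser renders can call the control's methods.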

AVG fixed the security issue in AVG Secure Search 18.1.7.598 and AVG SafeGuard 18.1.7.644, released in May. It's not clear whether the toolbar updates itself, so users should make sure they download and install the latest version if they intend to keep using it.

AVG did not immediately respond to a request for comment.

According to Dormann, this AVG Secure Search flaw is the perfect example of how third-party programs bundled with free software -- commonly known as adware, bloatware or foistware among users -- can increase the security risks for Internet users.

"Free software isn't always free," Dormann warned in a blog post in which he described how his attempt to download and install a popular video player through Download.com resulted in four third-party programs being offered during and after the installation process, leaving him with a "nearly unstable" operating system.

"If you must use a service known for bundling adware into their installers, pay careful attention to the installation steps to make sure to opt out of any additional software choices provided," Dormann said. "Even installing applications such as Oracle Java or Adobe Flash may result in unwanted software, such as browser toolbars, if you are not careful."

One of the strategies to stay safe on the Internet involves minimizing the computer's attack surface by restricting the number of installed applications that could be targeted, Dormann said. "More software is not the solution, it's the problem."


Monitoring an IT network is an essential part of any business' routine processes, yet so many companies fail to make sure they do this properly.

The risks of not monitoring a network are numerous: failing to do so can lead to serious IT stability problems, as well as performance and reliability issues. On the security front, there is the danger of network attacks going unnoticed, as well as the potential for user misuse. Network security is so important, according to a recent Forrester report, that businesses spent over a fifth of their total security budget on it last year.

Here we've detailed the top five reasons why you must monitor your network:

Stability
Monitoring your network enables you to ensure that every aspect of your technology is stable and functioning as you expect. It is a crucial part of checking that systems are interacting correctly, and that there are no bottlenecks in the network where data is stuck.

As an indication of just how important stability is to companies, a recent survey by Opsview showed that 77% see the need for network stability as the main reason for monitoring.

Performance
A network simply has to perform to the levels required by all of its users, both within a company and externally (such as for partners and customers). Failure to do so can lead to staff being unable to operate effectively, and customers abandoning a purchase or not being provided with the level of service they have paid for. Businesses need to be clear about the capacity and speed requirements of their network, then use monitoring to verify what is happening. Around one fifth of those that monitor their network say performance is their main reason, though many others include it as a secondary key factor.

Reliability
Aside from a network simply performing to the levels expected, it must also work at all times. It is not unusual for a business to expect 99.9% uptime from their technology, and monitoring is an essential method for businesses to ensure success in this area. Real-time alerts in network monitoring software enable businesses to know as soon as there is any reliability problem, while the software can also alert them of potential issues that are emerging on this front.
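That 99.9% figure translates directly into a downtime budget your alerting has to protect, and the arithmetic is worth keeping handy:

```python
def downtime_budget_minutes(uptime_pct: float, days: int = 30) -> float:
    """Minutes of allowed downtime for an uptime target over a given period."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_pct / 100)

print(round(downtime_budget_minutes(99.9), 1))   # 43.2 minutes per 30-day month
print(round(downtime_budget_minutes(99.99), 1))  # 4.3
```

A 99.9% monthly target leaves only about 43 minutes of slack, which is why real-time alerts matter: a single unnoticed outage can burn the whole budget.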

Security (from external threats)
Businesses face an abundance of threats to their technology and their data, including hacking, phishing and other malicious damage. By combining the right security software with strong network monitoring, companies can identify weaknesses in their security setup as well as be alerted to any unusual traffic events, enabling a fast response to potential threats. On a scale of 1 to 5, where five is highly important, respondents to the survey placed the immediate alerting of network issues (including security and performance) at 4.69 on the scale, meaning it is crucial to them.

Preventing user misuse
IT network monitoring enables businesses to identify and stop any user misuse of their technology. Insider threats continue to be prevalent in businesses, particularly when it comes to unauthorized access by employees to sensitive data they do not need for their job. By using the right monitoring technology, businesses can see which users are doing what on the network, as well as receive alerts on any behavior that is classified as not safe.

Try out QWERTY's network monitoring with a 30-day risk-free guarantee.


With proper preparation, cloud-based disaster recovery will enable your organization to weather the season of storms and power outages.

The season of power outages has arrived. We can expect coastal tropical storms and hurricanes and Midwest twisters and tornadoes to bring a season of outages and, unfortunately, plenty of loss. Government agencies, if not properly prepared, will see applications and data centers swept away with the same speed and suddenness as the wild weather itself -- even with so much advanced technology and so many outstanding preemptive tools and systems available.

It was only two years ago that Hurricane Sandy hit the East Coast, knocking out data centers from Virginia to New York and New Jersey. They lost public power and went dark for days, leaving lots of important information unavailable.

For government agencies that use their own internal data centers to house applications, public multi-tenant clouds offer a lower-cost, easy-to-deploy disaster recovery/continuation of operations (DR/COOP) solution. The steps below can help data centers plan and execute effectively with minimal to no disruption in the production environment.

1. Know your mission-critical applications. Determine which of your Web-based applications cannot go down for even a short (or extended) period of time. Identify these applications along with their dependencies and minimal hardware requirements to operate. Document your findings as these will become part of your DR/COOP plan and will help you when you move on to step two.

2. Choose a compliant cloud service provider or give a checklist to the one you have. Identify the right cloud service provider (CSP) that can support your business and technical requirements. If possible, choose a CSP that uses the same hypervisor that you use in-house. This will make mirroring a lot easier, faster, and cheaper in the long run.

3. Configure remote mirrored virtual machines. Depending on the hypervisor you currently are using for virtualization, either set up the data center to automatically mirror these virtual machines (VMs), or arrange to manually set up the remote VMs. Either way, make sure there is a mirrored VM for each production system that needs emergency backup.

4. Set up the failover to be more than just DNS. With the mirrored VMs tested and in place, it's time to select a technology that will handle the failover if and when a disaster occurs. When selecting this technology, avoid one that depends on a change to Internet domain name system records. While a DNS change will work, in most cases there will be a downtime of many hours or possibly even more than a day before users can reach the DR/COOP site. Therefore, seek a technology that can detect a failure in your primary data center and redirect end users instantly to the DR/COOP solution.

5. Perform regular failover tests. With the above steps complete, the final step is performing the end-to-end failover test, which must be routinely tested with the DR/COOP site. Depending on internal policies, this test may be as small as one application's individual failover, or you may wish to schedule a full site failover. Whichever is done, it is important to document the process, the steps taken when performing the test, and a clear record of results after each test is done. If your failover plan did not work, refer back to your documentation, identify what did not work as expected, make the adjustments to your plan (and documentation), and test again. You may need to do this multiple times until you have a bulletproof failover plan.
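The decision logic behind steps 3-5 can be sketched as a small probe-and-route routine. The endpoints below are hypothetical, and the TCP connect stands in for whatever health-check and traffic-management hooks your CSP actually provides:

```python
import socket

def primary_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Probe the primary site with a plain TCP connect (stand-in health check)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def choose_site(healthy: bool, primary: str, dr_site: str) -> str:
    """Route users to the DR/COOP site the moment the primary fails."""
    return primary if healthy else dr_site

# 192.0.2.1 is a reserved documentation address, so this probe fails
# and traffic is routed to the DR site.
active = choose_site(primary_healthy("192.0.2.1", 443, timeout=1.0),
                     "primary.example.gov", "dr.example.gov")
print(active)
```

The redirect itself should be handled by a traffic manager rather than a DNS record change, to avoid the hours of TTL-related downtime noted in step 4.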

While we can't control Mother Nature, we can control our preemptive strikes against data disasters. A single emergency can take down a data center, but a simple plan and proper preparation can prevent disaster. Whether you bring the expertise in-house or outsource it, make the time and budget available to plan properly so you are not out of luck during an outage.


Yet another critical security flaw has been found in Adobe's notoriously sieve-like Flash plug-in, this time by Google engineer Michele Spagnuolo. His exploit tool, called "Rosetta Flash," is just a proof of concept, but it could allow hackers to steal your cookies and other data using malicious Flash .SWF files. The underlying weakness was well known in the security community, but it had been left unfixed until now because nobody had found a way to harness it for evil.

So how does this affect you? Many companies, including Twitter, Microsoft, Google, and Instagram, have already patched their sites, but beware of others that may still be vulnerable. Adobe now has a fix, and if you use Chrome or Internet Explorer 10 or 11, your browser should soon update automatically to the latest version of Flash, 14.0.0.145 (check your version here). If you use a browser like Firefox, however, you may want to grab the latest Flash version from Adobe directly (watch out for unwanted add-ons with pre-checked boxes). Finally, if you use apps like Tweetdeck or Pandora, you'll need to update Adobe AIR -- that should happen automatically, but the latest version is 14.0.0.137 for Windows, Mac, and Android.
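If you are scripting a check against the patched version numbers above, compare dotted version strings numerically rather than lexically; this small sketch (the function names are illustrative, not from any Adobe tool) shows why:

```python
# Minimum patched versions cited in the advisory above.
PATCHED_FLASH = "14.0.0.145"
PATCHED_AIR = "14.0.0.137"

def parse_version(v):
    """Split a dotted version string into a tuple of ints for comparison."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed, minimum):
    """True if the installed version is at least the patched minimum.
    Numeric comparison matters: as plain strings, "9.0.0.1" > "14.0.0.145"."""
    return parse_version(installed) >= parse_version(minimum)
```

Python compares tuples element by element, so `(14, 0, 0, 145)` sorts correctly against `(9, 0, 0, 1)` where a string comparison would not.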

Via: Krebs on Security

Source: Michele Spagnuolo, Adobe

Control freaks, blame games, and other misguided attempts at building a better business through IT outsourcing.

There's still no script for the Great American IT outsourcing project. But today's most common outsourcing pitfalls have less to do with technology and everything to do with relationships and communication. Or lack thereof.

"Both companies have to rise to the occasion to make it work," says Romi Mahajan, president of marketing consulting firm, the KKM Group, which outsources some of its IT operations.

Nevertheless, communication breakdowns and finger pointing frequently derail even the best-laid outsourcing plans. Here are four missteps to avoid.

1. You play the blame game
Whether onshoring in Kansas City or offshoring in India, outsourcing is a relationship. If communication is poor or scarce and blame is passed around, you won't form a lasting business relationship.

Mahajan, who will be a panelist at a session at Interop New York (Sept. 29 – Oct. 3) called "Outsourcing, Virtualization and Cloud Computing - A Sign of the Broken Relationship Between IT and The Business?," sees outsourcing from both sides of the fence: he's an advisor to India-based outsourcer Advaiya Solutions as well as president of the KKM Group.

Mahajan notes that complex projects, such as a CRM implementation, can run into big trouble over a minor technical glitch or a miscommunication between people.

"The customer blames the offshore company for overpromising, and a routine issue then becomes a mountain of a problem," he says. "I see this kind of 'us versus them' thinking a lot. It delays projects for months."

2. You focus on pay rates over results
Companies outsourcing IT resources often fixate on getting the lowest hourly rate without seeing the big picture, says Forrester principal analyst Liz Herbert. They should always ask: What is the overall pyramid -- are we getting low rates but a huge team?

Outsourcer A, for example, could charge $100 per hour but staff the project with a large team of junior offshore workers with limited experience. Outsourcer B could charge $200 per hour and staff the project with one-third as many workers as Outsourcer A, but far more experienced ones.
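Herbert's point becomes concrete with back-of-the-envelope arithmetic. The team sizes below are illustrative (the article gives only the rates and the one-third ratio): the "cheap" vendor can still produce the bigger invoice.

```python
HOURS_PER_WEEK = 40  # assumed billing hours per person per week

def weekly_cost(rate_per_hour, team_size, hours=HOURS_PER_WEEK):
    """Total weekly billing: hourly rate x headcount x hours."""
    return rate_per_hour * team_size * hours

# Illustrative headcounts reflecting the article's 3:1 staffing ratio.
cost_a = weekly_cost(100, 30)  # Outsourcer A: low rate, large junior team
cost_b = weekly_cost(200, 10)  # Outsourcer B: double the rate, a third the staff
```

Here Outsourcer A bills $120,000 a week against Outsourcer B's $80,000, even though B's hourly rate is twice as high -- which is exactly why the "overall pyramid" matters more than the rate card.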

3. You're a control freak
Companies should be careful not to micromanage. The whole point of hiring an outsourcer is to tap IT skills and expertise you don't have, but too many companies squash efficiencies by over-dictating how a project should run, says Herbert.

For instance, a company contracts with an outsourcer for SAP support on systems used to run a manufacturing plant. The real goal of the project is to improve plant output. However, if the company micromanages the specific technologies and SLAs, it shuts out the outsourcer's own ideas for reaching that business goal.

Outsourcing pricing models commonly use the concept of full-time equivalents (FTEs), which measures how many full-time workers are needed to perform tasks. But Herbert says Forrester research shows that if a company moves from an FTE-based approach to one where the outsourcer manages the resources according to what it thinks is best throughout the project, the client can save 20% to 30% in most cases.

"The client rarely has the right expertise to know exactly what mix and how many FTEs they need -- and, of course, this changes over time."

And having SLAs that are not aligned with business goals can lead to troubling scenarios where the outsourcer is technically achieving all the SLAs but the client is not getting business results. "It's a case of 'all the SLAs are green' but our business goal is red," says Herbert.

4. You think you can outsource your whole brain
However, there's a danger in swinging too far in the other direction. Mahajan says too many companies think their job is done once they've signed an outsourcing contract -- they treat the outsourcer as a mind reader and give it too much leeway.

"It's almost impossible to convey 100% the desired outcome of an IT outsourcing project," says Mahajan. "Only the company knows exactly what it wants to achieve. Don't think the outsourcer will do all the thinking for you."

Business goals can be moving targets, and if companies don't keep explaining those goals throughout the process, the relationship will sour fast.

The mistake of "outsourcing your brain" can signal poor planning as well as leadership problems within the company, says Michele Chubirka, senior security architect at Packet Pushers, a popular podcast for networking pros.

"If you're not dealing with your own conflicts between IT and the business, outsourcing will not fix them," Chubirka says. "An outsourcer should always provide expertise that you don't have in-house -- period."

(Chubirka will also be on the Interop New York panel with Mahajan.)

Indian outsourcer Advaiya (where Mahajan serves as an advisor) dealt with such a conundrum when it developed an entire technical framework, worth several hundred thousand dollars, for a well-known tech vendor. The vendor decided it didn't want to use the framework but never communicated that to Advaiya.

"We did all this work and the vendor didn't want to pay at the end, but they didn't update our team along the way," Mahajan says. "We were working in a vacuum."

