CitySmart Blog

Thursday, February 26, 2015
John Miller, Senior Consultant

Body cameras for police officers have quickly gone from an expensive novelty to something that cities need to seriously consider. Even the President is now placing pressure on cities and pushing for financial incentives to help pay for body cameras. A recent article from The Arizona Republic predicts that body cameras will become the norm within 10 years. Like it or not, these technology-intensive cameras will eventually become part of your public safety budget—if they aren’t being considered already.

While many articles focus on the cameras themselves, the logistics, and the politics of body cameras, many gloss over the underlying technology. If you’re using, actively planning for, or discussing body cameras for your police officers, here are a few questions that are easy to overlook.

  1. Are you able to back up your data and recover it in case of a disaster? You should be backing up your data anyway, but it becomes even more important to recover from a disaster with all body camera footage intact. This means onsite backup that provides at least hourly snapshots of your data for quick recovery (in case of a server failure) and offsite backup that ensures you can recover your data if a fire, flood, tornado, or other disaster hits your city. Explore cloud solutions that offer unlimited offsite data backup storage for a set monthly cost. Otherwise, your costs could skyrocket if you pay by the gigabyte or have caps on your data backup storage.
  2. Is your data encrypted and secure? You absolutely don’t want people hacking into police footage from body cameras. This is a good time to review your security. Your body camera data needs to be encrypted onsite, offsite, and while in transit between machines (such as uploading or downloading information). That way, the information will be useless to hackers if they happen to access it. Then, you need to make sure that your network security or cloud provider security follows best practices and is monitored and maintained by experienced IT professionals.
  3. Do you have clear data retention policies that are easy to follow? A modernized storage system can help you store, archive, and find data easily. It helps when your storage repository can automate some of the more tedious aspects of retaining and deleting data according to the law. Body camera footage will be demanded when a sensitive case arises, and you don’t want to be caught without data that you should have on hand. At the same time, you want to clear away as much data as possible when you’re not legally required to keep it.
  4. Do you test your ability to retrieve and successfully back up your data? Even given the precautions above, you cannot assume that everything is working properly. You absolutely must test your data backup and security to make sure that you eliminate any severe risk of a data breach or data loss. We recommend testing your data backup and security at least quarterly to make sure that all of your body camera footage is recoverable in case of a disaster and meets information security best practices. It’s becoming less and less excusable (and more embarrassing from a legal and public relations standpoint) when cities claim that data is missing or unrecoverable.
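On the retention point, the mechanical part of the job lends itself to automation. Here is a rough, hypothetical sketch of the idea (not any specific records-management product; the directory name and the 180-day window are invented for illustration, and real retention periods come from your state’s records schedule and legal counsel):

```python
import os
import time

# Hypothetical retention window -- actual periods are set by law and your
# city's records-retention schedule, not by this script.
RETENTION_DAYS = 180
FOOTAGE_DIR = "footage"  # hypothetical local archive directory

def expired_files(directory, retention_days, now=None):
    """Return file paths whose last-modified time is older than the window."""
    now = time.time() if now is None else now
    cutoff = now - retention_days * 86400  # seconds per day
    expired = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            expired.append(path)
    return expired

if __name__ == "__main__" and os.path.isdir(FOOTAGE_DIR):
    # Review the list before deleting anything -- footage under legal hold
    # must be excluded no matter how old it is.
    for path in expired_files(FOOTAGE_DIR, RETENTION_DAYS):
        print("past retention window:", path)
```

A real system would layer legal holds, audit logging, and chain-of-custody records on top of this; the sketch only shows why automated flagging beats manually hunting through folders.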

While cities might fear the costs of having to invest in body cameras, the situation gives cities an opportunity to examine the state of their current technology. Many of the questions above don’t just apply to body camera data. Data backup, disaster recovery, record retention, data storage, encryption, security, and testing come into play with all city data and information. Luckily, many of the investments needed are more cost-effective than ever.

To talk about storage, security, and data backup needs for body camera data, please contact us.

Thursday, February 19, 2015
Brian Ocfemia, Technical Account Manager

We’ve all heard overblown technology claims, such as “Apple computers never get viruses.” But they do, and when they do there is outrage and possible backlash against what’s still a pretty good product. Similarly, we still hear claims that the cloud is 100% reliable and that upgrades and maintenance never interfere with users. Then, when an outage or maintenance-related downtime occurs, critics point fingers and claim that the cloud did not deliver what was promised. Often, they use that frustration to argue for going back to hosting their own servers and bringing their software back onsite.

What’s happening here is common in the world of technology (and with many other things in life). A new technology legitimately improves upon a previous one, but expectations are set too high. So even if performance reaches 99.9%, critics will rip apart the 0.1% that kept it from reaching 100%. But if we’re accustomed to lower expectations from old technology, then something we expect to work 85% of the time will delight us if it hits 90%, even if that means higher costs and more risk than with modern technology.

A recent article on LinkedIn lays out some common points that people bring up to shoot down the cloud based on real but skewed data. The author cites several representative claims that often cause a lot of doubt, so let’s look closer at these oft-heard arguments.

“Azure experienced 92 outages totaling 39.77 hours for the year. As stated by Microsoft’s own Chief Reliability Strategist David Bills, cloud service failure is ‘inevitable.’”

Reality: By focusing only on the total amount of downtime during the year, it’s easy to miss the high percentage of total uptime. Cloud services run 24/7/365, which means Azure’s uptime during 2014 was 99.5%. And Azure was actually the outlier by a long shot. Other common cloud services such as Rackspace, Google Cloud Platform, Joyent, and Amazon Web Services all had higher than 99.9% uptime. From our experience, these results easily beat most onsite servers and match or exceed most data centers. Cloud service providers invest in redundant power lines, generators, and Internet connections to ensure such high uptime for a variety of customers. Their resources far outpace most onsite setups and smaller data centers.
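The uptime arithmetic is easy to check for yourself: a year of 24/7/365 service is 8,760 hours, so 39.77 hours of downtime still leaves roughly 99.5% uptime.

```python
# Convert a year's total downtime into an uptime percentage.
hours_in_year = 365 * 24   # 8760 hours of 24/7/365 service
downtime_hours = 39.77     # Azure's reported 2014 downtime

uptime_pct = 100 * (1 - downtime_hours / hours_in_year)
print(round(uptime_pct, 2))  # prints 99.55
```

Run the same calculation against your own server logs and you can compare your onsite uptime to these cloud figures directly.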

“A recent Verizon 40 hour cloud shutdown proved that cloud DC maintenance is not seamless in all cases.”

Reality: First, it’s important to note that this situation with Verizon is another anomaly. The article from which the author quotes clearly says, “For an industry that generally measures downtime in minutes or several hours, this was a long shutdown.” But who ever said maintenance was seamless? It may be less intrusive than traditional ways of conducting maintenance, but an occasional interruption or pause is not unheard of. Compare these brief interruptions with the downtime, staff time, and IT maintenance costs of updating your current onsite servers. With cloud providers, you don’t even have to think about maintenance. You may experience an occasional few minutes of downtime, and a rare anomaly might lead to an outage lasting hours. But the way cloud providers conduct maintenance is much faster, less disruptive, and less costly than traditional server maintenance—by a long shot.

“Cloud providers (CPs) have a commercial interest to hype to their potential and existing customers how easy it is to migrate workloads to the cloud.”

Reality: Sure, you will hear vendors do what they always do: sell and make everything sound easy. But the author mentions another important point: “One study conducted by BT found that 32% of enterprises don’t have the skills internally to manage cloud migrations.” While a cloud provider can help with the migration, you need a strong IT staff or vendor that has done these kinds of migrations many times. The right IT professionals will help you:

  • Investigate your situation and review your business needs.
  • Create a plan for migrating your data, settings, and programs to the cloud servers.
  • Execute the migration through a rigorous process, including testing and participation from business stakeholders to ensure that all is well on the go-live date.

“Many enterprises assume that once they have signed a contract with the CP that their responsibilities end.”

Reality: Obviously, that’s an incorrect assumption for any hardware or software you would use. Even when traditionally buying software from a vendor that installs a server onsite, you still have to find space for that server, connect it to your network, and maintain that server. That’s why you would have your IT staff or vendor help with patching, updates, and upgrades. With cloud service providers, you still need IT professionals monitoring your cloud data and applications, alerting you to any issues, ensuring security (such as antivirus, antispam, content filtering, etc.), updating and patching the software, and tracking your cloud assets for reporting purposes. Your IT staff or vendor will also help you with any data migration needs or day-to-day technical help.

Overreacting to abnormal data about the cloud prevents you from making a good business decision. Overblown points will scare the less technically minded away and encourage them to stick with less secure, riskier traditional technology solutions. The two most important points to remember are:

  • The uptime and reliability you will experience in the cloud far outpaces most traditional setups.
  • You will need experienced IT staff or a vendor to guide you through the technical aspects of a cloud migration and ongoing maintenance.

To talk about migrating to the cloud in more detail, please contact us.

Thursday, February 12, 2015
Nathan Eisner, COO

A recent article from Sarasota’s Herald-Tribune reported on a sensitive political situation concerning who manages the IT department within the city. While we’re obviously not speculating or commenting on the politics involved, it was striking to see the mayor quoted as saying, “We went through all these things that nobody, but nobody, understands. We have no way of knowing what goes on in the cyberspace games we're playing.” That lack of knowledge about IT among key city officials can have devastating consequences. Follow-up articles noted that onsite data storage was at high risk in a disaster and that the city faced dangerous security risks.

We often see conversations about IT in which important stakeholders such as elected officials and even city management don’t understand IT well enough to recognize critical risks and make good judgments about technology investments. IT often doesn’t help by remaining obscure, technical, and tactical when explaining its activities to city officials and managers. While that strategy may buy IT time, it eventually risks political explosions like the one in Sarasota.

Key stakeholders don’t need to be technical to understand IT. Instead, it’s important that they ask the right questions of IT in order to get a good non-technical, business-level understanding of IT’s accomplishments and any red flags. Even if you’re a technology novice, here are some points worth clarifying so that IT information presented to city officials or managers has the most impact.

  1. Each IT service needs to be explained in terms of business impact. No IT service should be so technical that you cannot understand why it is important and what it essentially does from a non-technical perspective. Some examples include:
    • Website management and maintenance: You invest in it to ensure that your website doesn’t crash or go down, and that users (both city employees and citizens) have technical support if something is needed related to the website.
    • Data backup and disaster recovery: You invest in it to ensure that no data is lost if a server fails or a disaster (like a tornado) hits the city.
    • Server, desktop, and mobile management and maintenance: You invest in it to ensure that technology problems are detected as early as possible, and that security patches and software updates are installed in a timely fashion to keep machines safe, secure, and up-to-date.
  2. Technical, tactical tasks need to be explained at a higher level. Many IT professionals, whether through obfuscation or inexperience, talk about what they do in terms of technical, tactical tasks. Rather than throw up your hands because you don’t understand the jargon, ask questions that raise the discussion to a higher level. For example, if your IT staff starts talking about the technical aspects of server load balancing, simply ask them to stop, remind them that they are talking to a business audience, and have them explain at a higher level how the technology is affecting business performance. Is something about server load balancing causing downtime or crashes as a result of aging hardware? Or is the load balancing just fine, meaning all systems are running normally? If the IT representative is unable to report at this higher level, you need to communicate with a more experienced person who can talk to business stakeholders.
  3. Understand the non-technical basics of alternative technology services. All IT services are not the same, and yet many non-technical decision makers assume they are created alike. Again, it’s fine not to understand the technical details of various services, but here are some examples of what any city administrator or clerk overseeing IT should know:
    • Understanding the difference between reactive, hourly IT service (only putting out fires) versus proactive, ongoing IT service.
    • Understanding the differences between servers providing you your software applications onsite, in a data center, or in the cloud.
    • Understanding the differences between manual data backup (such as tape or hard drives) versus automated onsite and offsite data backup accomplished through servers.
  4. Understand what happens when you underinvest in a service or fail to invest in it at all. We often see decision makers get so frustrated with the cost of IT that, without understanding much about the service, they cut it heavily, hand it off to a cheaper vendor, or remove it because it’s considered a “nice to have.” Ideally, you will want to understand things like:
    • Reactive, hourly service that only puts out fires will lead to high, unpredictable annual expenses, constantly crashing machines, and low employee productivity and morale.
    • Managing your own onsite servers introduces higher security risks, maintenance costs, and expensive hardware upgrades every few years.
    • Failing to automate and test your data backup leads to a high risk of data loss in the event of a disaster.
  5. Review IT reporting that focuses on non-technical, business critical information. If the reporting you receive from IT is full of technical data and reams of gobbledygook, ask for a version that gives an executive summary, high-level insights, and red flags related to business issues. For example, it’s helpful for you to know that the website uptime is 99.9%, that all data backup tests occurred and there are no issues, and that a server needs replacing because it is over five years old. You don’t need to know every single website metric, information about the daily backup logs, or server load balancing data. If your IT staff or vendor cannot provide clearer, non-technical reporting, then someone with more experience needs to report to you.

While the situation in Sarasota is extreme, it shows what can happen when ignorance about what IT does adds fuel to existing political fires. As a mayor or city manager, it may be tough to introduce the topic of IT to councilmembers who don’t have day-to-day operational knowledge. Yet it is part of your responsibility to demand and receive information that makes sense, even if you have to go back to IT a few times to get the right kind of information. More importantly, a lack of understandable, business-focused answers reflects a problem. Bad IT staff or vendors often hide behind technical jargon to cover up problems or inexperience. By asking the right questions, you expose these problems much more quickly and allow all stakeholders to understand exactly what IT is doing.

To talk about IT communication in more detail, please contact us.

Thursday, February 05, 2015
Alicia Klemola, Account Manager

As with an old car, it’s tempting to use your desktop and laptop computers until the blue screen of death beckons them into technology heaven. After all, you invested a lot of money in those computers and you want the full bang for your buck. And while you might hear that best practices call for replacing all hardware every 3-5 years (and more like 2-3 years for laptops), you may think of that rule as applying to the more important servers rather than the “less important” everyday computers that your employees use.

However, there are critical business reasons to replace your desktop and laptop computers that affect your bottom line both directly and indirectly. Here are five things to consider when taking a look at your aging desktop and laptop computers at your organization. 

1. The cost of a new computer is often lower than the cost of maintaining an old one.

Old computers used beyond their typical lifespans become ongoing problems. It becomes expensive for your IT staff or vendor to constantly take care of problems related to the blue screen of death, lack of memory, slow or freezing performance, and security issues. Your staff time or hourly vendor bills can easily exceed the $500 to $1000 it might cost to buy a new computer that will have far fewer issues.

2. Newer monitors are more power-efficient.

Your older computers may include clunky, huge cathode ray tube monitors that produce a lot of heat and consume a lot of energy. Add up this energy consumption across dozens or hundreds of computers and you’re talking about a lot of power costs. Newer LCD flat screens often cut that energy consumption per monitor by more than 50%, adding up to real cost savings.
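As a back-of-the-envelope illustration of how that adds up across a fleet (the wattages, hours, and electricity rate below are assumed round numbers for illustration, not measured figures for any specific monitor):

```python
# Rough illustration of monitor energy savings across a fleet.
# All figures below are assumed ballpark numbers, not vendor specs.
crt_watts = 75         # assumed draw of an older CRT monitor
lcd_watts = 25         # assumed draw of an LCD flat screen
monitors = 100         # size of the hypothetical fleet
hours_per_year = 2000  # roughly one work-year of on-time per monitor
rate_per_kwh = 0.10    # assumed electricity rate in dollars

def annual_cost(watts):
    """Annual electricity cost for the whole fleet at a given per-unit draw."""
    kwh = watts * monitors * hours_per_year / 1000
    return kwh * rate_per_kwh

savings = annual_cost(crt_watts) - annual_cost(lcd_watts)
print(f"${savings:,.0f} saved per year")  # prints $1,000 saved per year
```

Plug in your own meter readings and utility rate to see what the switch would actually save your city.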

3. Older computers mean using obsolete (and even unsupported) operating systems.

We’ve seen critical issues cropping up at organizations still using Windows XP on very old computers. While Windows XP is an extreme example, similar issues are on the horizon for Windows Vista (whose mainstream support from Microsoft ended on April 10, 2012) and even the currently dominant Windows 7 (whose mainstream support ended a few weeks ago on January 13, 2015). The longer you cling to older operating systems, the less useful and secure they will be for employees—and the harder for your IT staff or vendor to manage.

4. Newer computers are more secure.

As the information technology industry learns more about security and what works best for computer users, more security features are baked into newer computers to keep the user’s experience as safe as possible. Newer computers have operating systems that build in security from the ground up, and any additional security (such as antivirus) is much more easily managed by your IT staff or vendor. That means more built-in virus and malware prevention than older, less secure computers offer. The newer your computer, the lower your cybersecurity risk.

5. Newer computers can handle modern software and Internet applications.

Even if your older computers are maintained as well as an old classic car, you’ll still see employees having problems using modern software or Internet applications. Perhaps a new kind of software won’t work, or works slowly. Or your employees can’t watch videos or load information from important websites. Older computers simply can’t keep up with modern software (similar to how an old smartphone can’t handle modern versions of GPS software). You’re crippling employee productivity by having them use older computers.


These considerations should help you better make the business case to switch from older to newer computers. Many cities use these and additional reasons to help them replace computers, save money, and go green. Especially consider the cyber liability issues related to older computers. If you’re unable to follow current law because your older computers cannot handle basic security needs, then you open up the door to a lot of unnecessary legal risk. Saving money is important, but keeping your organization as secure as possible is even more important.

To talk more about desktop and laptop replacement, please contact us.

Thursday, January 29, 2015
Dave Mims, CEO

Heard about denial of service attacks? That’s where hackers will pummel an organization’s website servers with tons of bogus traffic so that the website becomes impossible for people to access. A recent story from the Columbia Daily Tribune reported that the city of Columbia, Missouri experienced a denial of service attack that led to a three-day website outage. That meant citizens could not access city services and information while valuable city staff time was tied up helping deal with the emergency.

The bad news? Denial of service attacks are hard to prevent. If a relatively sophisticated hacker wants to go after you, they will likely manage to disrupt your website. However, it helps when your city can respond within hours rather than days to eliminate the negative effects of a denial of service attack.

Here are some tips and best practices that you can implement to best handle a denial of service attack and recover as quickly as possible—without overtaxing your budget.

  1. Host your website in the cloud. It’s getting more and more difficult to effectively host your own website servers onsite or even in smaller data centers. By hosting your website in the cloud, you benefit from the largest, most advanced, and most secure hosting providers on the planet. Cloud data centers are usually much more capable of handling denial of service attacks than your onsite setup.
  2. Consider investing in a content delivery network. A new buzzword related to the cloud that you may occasionally hear is “content delivery network.” It’s a very technical concept, but all you need to know is that it’s a way for your website content to be copied to multiple cloud data centers across the country. Let’s say someone in Oregon wants to access a Georgia city’s website. If your website content is copied to 10 servers around the country, a server at the closest cloud data center in Portland, Oregon ends up delivering the content to that person. With your website content geographically distributed across so many servers, a denial of service attack is much harder to pull off than when only one location serves the content.
  3. Make sure you back up your data. While denial of service attacks don’t usually lead to data loss, it’s still possible that you won’t be able to access critical data for a long time. It helps to have your website (and all critical) data backed up both for quick onsite recovery and offsite disaster recovery. That way, if you’re unable to access certain data or information for days, you’ll at least have a copy that’s backed up separately from your temporarily inaccessible website servers.
  4. Proactively monitor your network and set up alerts. If you’re not continuously monitoring your network and instead only react when something like a denial of service attack happens, you waste valuable time in handling the problem. Investing in experienced IT professionals who monitor your network means they will detect problems related to denial of service attacks very early and address them almost as soon as they happen. Otherwise, you may take hours just to realize that a denial of service attack is underway, and spend more hours calling in staff and IT consultants to start addressing it. It’s like firefighters arriving at a fire several hours late.
  5. Rely on experienced IT professionals to manage all vendor communication. If non-technical city staff need to get on the phone and try to explain what’s happening, you risk wasting valuable time and possibly handling the problem in the wrong way. Experienced IT professionals can coordinate communication with multiple vendors such as Internet service providers, cloud data centers, website hosting providers, and any other relevant vendors. There are often many technical components to recovering from a denial of service attack, and you want to make sure you have the right people helping you in that recovery.
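To make the monitoring idea concrete, here is a minimal sketch of the kind of availability check that monitoring tools run continuously. Real monitoring platforms add dashboards, escalation paths, and trend data, and the URL below is hypothetical.

```python
# Minimal sketch of an availability check -- production monitoring uses
# dedicated tooling, not a toy script. The URL here is hypothetical.
import urllib.request
import urllib.error

def site_is_up(url, timeout=5):
    """Return True if the site answers with an HTTP 2xx/3xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError):
        return False

def check_and_alert(url, alert):
    """Call the alert hook (email, SMS, paging service, etc.) when the site is down."""
    if not site_is_up(url):
        alert(f"ALERT: {url} is not responding -- possible outage or attack")

if __name__ == "__main__":
    # Hypothetical city website; a monitoring service would run this on a schedule.
    check_and_alert("https://www.example.gov", alert=print)
```

The point of the sketch is the loop it implies: a check runs every few minutes, and a human is paged the moment it fails, instead of hours after citizens start calling.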

For cities on a tight budget, simply moving your website hosting to the cloud and engaging the ongoing monitoring services of experienced IT professionals will make it much more likely that you respond to and recover from denial of service attacks in hours rather than days. Plus, these kinds of technology investments also help you with important areas such as:

  • Cybersecurity and cyber liability
  • Website reliability and uptime
  • Data backup and disaster recovery

To talk more about mitigating the risk from denial of service attacks, please contact us.

Thursday, January 22, 2015
John Miller, Senior Consultant

One of our colleagues (let’s call him “Joe”) is particularly tech-savvy. While not an IT professional, he has been involved in the information technology field for over 10 years. He’s immersed in that world and can easily talk to us about the many nuances of data backup, website content management systems, and software. That’s why it surprised us when he called us up a few weeks ago and told us about how he eliminated a particularly nasty computer virus.

Luckily, the computer he used was brand new, so he was able to erase all his data and reset the computer to the original factory settings. However, it was a stark reminder that even the most tech-savvy people can click on the wrong attachment and download a computer virus.

We’re sharing this lesson as a case study (with “Joe’s” permission but keeping his identity anonymous) in order to highlight the importance of making sure your information is protected. Because even well-intentioned people can accidentally download a computer virus in a matter of seconds, we want to make sure that a virus doesn’t knock out your network or cause you to lose important information.

Here’s how it happened. 

1. Joe purchased a new computer and wanted to download the Google Chrome Browser.

Joe set up his computer and made it through the preliminary setup. He was ready to get onto the Internet. Joe prefers the Google Chrome Browser, so in order to download it he had to open up the computer’s default Internet browser and find the right webpage.

2. On a search engine, he searched for “Chrome browser download” and clicked on the first search result.

He used the computer’s default Internet browser and search engine to search for “Chrome browser download.” A list of search results displayed and Joe clicked on what he thought was the first legitimate search result.

At this point, we should note that the search engine’s ads did not look terribly different from organic search results. Unbeknownst to Joe, he clicked on an ad, not a search result. In hindsight, he realized that the ad led to a website that was not Google’s.

3. He landed on a seemingly legitimate Google Chrome browser download page and clicked on a button to download the browser.

Malicious sites are often good at replicating the look and feel of legitimate sites. Joe was in a hurry. Because he already thought he had clicked on the top search result (which he logically thought must be Google’s page), he assumed this page was legitimate and he clicked “Download.”

4. While going through the downloading process, he noticed many more agreements and “bundleware” than usual.

It was while he clicked “I Accept” for many pages of agreements and noticed a great deal of “bundleware” (additional software options that he could download in addition to the Chrome browser) that red flags started to go off in his head. However, he went through the entire process because many kinds of software often feature similar processes (such as Java downloads from Oracle).

5. Finally, he realized something was wrong when the Chrome browser opened and asked him for his Google username and password in an unusual way.

While the page looked somewhat like the typical Google sign-in page, there were clear differences that he was savvy enough to notice. He came within a few seconds of sharing his important Google username and password with hackers, but unfortunately he had already downloaded malware to his computer.

At this point, the antivirus program that came with his computer started alerting him that it detected malware on his computer. However, the malware was so cleverly written and installed (and remember, installed voluntarily by Joe) that it could not be removed manually. The malware kept reinstalling itself every time the antivirus program quarantined or removed it.

More dangerously, the malware hijacked his Internet browsers with fake search engine and login pages. His computer also began to take actions on his behalf that he was not agreeing to. The “bundleware” software that originally looked like innocent, helpful programs began to open up on his computer and fill his screen with pop-ups.

Luckily, the story has a positive ending, but it required some brutal tactics. Thank goodness Joe had bought the computer only hours before and had yet to store any important data on it. He followed the steps below to combat the computer virus.

1. Joe shut off Internet access to his computer.

Joe severed all wireless and wired Internet connections to his computer. At that point, the antivirus alerts stopped. The malware needed an Internet connection to reach Joe’s computer, so cutting off Internet access cut off the hackers’ communication channel.

2. Joe assessed if any important damage had occurred, and if any data or software programs were salvageable.

Luckily, no important data resided on the computer and Joe had not entered any login information into a browser. However, because the malware kept reinstalling itself, there was no manual way to remove the virus and maintain the integrity of his computer.

3. Joe reset his computer to the original factory settings.

This is the step that eliminated the virus, but did so at the cost of any important data on the computer. The reset took several hours, but it wiped out any extraneous programs that appeared on the computer other than the original factory installed programs.

4. Joe discovered one computer virus remnant lingering in the default Internet browser and reset the browser to its default settings.

When Joe opened up the default Internet browser, he was stunned to see a remnant of the virus lingering even after a factory reset. The browser’s home page was set to a malicious search engine page that resembled Google but clearly wasn’t. He restored the browser to its default settings.

5. He ran a spyware scanner to check for any remaining viruses.

A scan of Joe’s computer detected nothing. At that point, Joe was able to use his computer normally, although he kept an eye out for unusually slow performance, strange popups, and any interruptions or odd computer behavior when doing online banking or payments.

We’re sharing this case study to warn you that it isn’t just non-tech-savvy people who get viruses by accident. For Joe, all it took was some haste and a few distractions to go down a dark path that ended with vicious malware voluntarily installed on his computer. To head off any disruptions related to events like this, we recommend that you:

  • Back up your data, both onsite and offsite.
  • Train employees about phishing and malicious links, emails, and attachments.
  • Build strong network security.
  • Use enterprise antivirus software with IT professionals managing it.
  • Encrypt your data.

Accidents happen, so you want to make sure you’re covered in even the worst computer virus situation. That way, you mitigate the risk of losing data, losing money, and losing time spent recovering from the virus.

To talk more about antivirus protection, please contact us.

Thursday, January 15, 2015
Brian Ocfemia, Technical Account Manager

While Windows XP market share has fallen to about 18%, that still means a lot of computers are running this outdated operating system. Microsoft stopped supporting Windows XP on April 8, 2014, which means that computers still using it have not received any security patches or updates since. Like a decaying building no one maintains, the condemned, abandoned house of Windows XP becomes more and more dangerous to “live” in.

We’ve written before about some of the malware and security risks that appeared almost immediately once Microsoft cut off support. Because we still see many computers running Windows XP, we wanted to review some new risks along with earlier warnings that grow more urgent with each passing day.

1. Security problems are so significant that vendors are starting to refuse service for machines using Windows XP.

We recently saw an email from one of our client’s software vendors in which the vendor stated that any computers with Windows XP would be blocked from accessing that vendor’s servers. In other words, the security risks of Windows XP have become so significant that vendors may soon not even want to deal with a potentially contaminated computer. In our example, the email was spurred by a new security vulnerability that the vendor had to protect itself against. While the vendor could ensure that its own equipment and any modern client equipment and software were protected, the vendor could not ensure that Windows XP computers were protected.

2. These kinds of security problems will only get worse.

Based on past experience, we feel that this vendor warning is only the tip of the iceberg. Again, consider the house example. When a house is abandoned and no one keeps it up, will it become a better place to live over time? Or a worse place to live? The longer Windows XP exists, the more it is vulnerable to greater and greater hacking attempts, security vulnerabilities, and cyber liability. Eventually, these massive security problems will place your organization at such high risk that you could be legally at fault for negligence if something bad happens.

3. The newest versions of Microsoft Office won’t work with Windows XP.

One of the most commonly used productivity software packages, Microsoft Office, simply won’t work with Windows XP in its newest versions. With so many organizations now using software like Microsoft Office in the cloud, that means more organizations are using the newest versions. If you’re working with employees or outside vendors who use Microsoft Office 2013 and you literally cannot open the files, you’re unnecessarily slowing your productivity to a crawl. And like the earlier security problems, this compatibility problem will only get worse over time.

4. Windows XP limits your organization from using modern software.

Windows XP was released in 2001, which puts it 13 years behind modern software. Many improvements arrived in Windows Vista, 7, and 8 to keep up with the rapid pace of information technology. When your organization needs new software for accounting, project management, agenda and meeting management, document management, or other important business functions, Windows XP shuts you out of many choices. It’s like wanting to make major improvements to a house that doesn’t have indoor plumbing or three-pronged electrical outlets.

5. Windows XP cripples your IT staff or vendor from managing your network.

Modern operating systems improved the way IT professionals can manage and oversee your network, including managing security patches, user permissions, and remote help. The city of Detroit is struggling with this exact issue; even with a new CIO, the city’s IT environment is considered “dysfunctional” with so many computers on Windows XP. If your IT staff or vendor is prevented from properly administering your IT network, that puts you at risk and makes IT’s job ridiculously hard with no guarantee of successful service.
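A first step toward modernizing is simply knowing which machines are still on Windows XP. As a rough sketch (the hostnames and inventory data here are hypothetical, not from any particular tool), a short script can flag outdated machines from an OS inventory that your IT staff or vendor already collects:

```python
# Hypothetical inventory: hostname -> reported operating system.
# In practice this would come from your network management tooling.
inventory = {
    "clerk-01": "Windows XP",
    "finance-02": "Windows 7",
    "police-03": "Windows XP",
    "admin-04": "Windows 8.1",
}

def flag_outdated(inv):
    """Return a sorted list of hostnames still running Windows XP."""
    return sorted(host for host, os_name in inv.items() if "XP" in os_name)

print(flag_outdated(inventory))  # lists the machines that need replacing
```

In practice, your IT staff or vendor would pull this inventory automatically rather than maintaining it by hand, but even a simple list like this turns a vague worry into a concrete replacement plan.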


Sometimes we make recommendations, even strong recommendations, about certain technologies. But this post is more than a recommendation. Quite simply, if you choose not to replace your Windows XP machines, you are placing your organization at great risk. Plus, to benefit from the many low-cost, high-impact software and IT services that are continually improving, you need to modernize your IT environment. Newer operating systems will be a breath of fresh air for your employees, your organization’s productivity, and your IT staff or vendor.

To talk about these concerns in more detail, please contact us.

Thursday, January 08, 2015
Nathan Eisner, COO

As organizations continue to shift their hardware, software, and data storage into the cloud, just as many organizations still cling to more traditional technology setups with onsite servers, installed software, and long-term licenses. Despite significant technology advances, it’s easy to grow accustomed to traditional yet outdated ways of handling your most important business applications. Or perhaps you understand that data backup and storage work well in the cloud, but you’re not convinced about something like accounting software.

In our experience, we see a wide range of common applications that benefit from the cloud’s low cost and high reliability, security, and ease of management. Here are five business applications that we find particularly suited to the cloud, and why. 

1. Data backup, disaster recovery, and data storage.

While you still might want some onsite data storage for quick recovery, we recommend storing the majority of your data offsite in the cloud. Cloud data centers offer low-cost storage along with high security, encryption, and fast recovery. Under traditional models, it might take a data center up to 48 hours to ship you a fully loaded server containing your data; with the cloud, you can often access your data almost instantly after a disaster, as long as you have an Internet connection. Plus, your data is automatically and continuously backed up over the Internet, so your staff doesn’t need to create manual backups.
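To illustrate the idea behind continuous, automatic backup (this is a simplified sketch, not a replacement for a real cloud backup service), the core logic is to copy only files that are new or have changed since the last run:

```python
import hashlib
import shutil
from pathlib import Path

def file_digest(path: Path) -> str:
    """Fingerprint a file's contents so changes can be detected."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def incremental_backup(src: Path, dst: Path):
    """Copy files from src to dst only if they are new or have changed.

    Returns the list of relative paths that were copied this run.
    """
    copied = []
    for f in src.rglob("*"):
        if not f.is_file():
            continue
        target = dst / f.relative_to(src)
        # Copy only when the backup copy is missing or its contents differ.
        if not target.exists() or file_digest(target) != file_digest(f):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied.append(str(f.relative_to(src)))
    return sorted(copied)
```

A real cloud backup agent layers encryption, versioned snapshots, and offsite transfer on top of this basic change detection, but the principle is the same: unchanged data is never re-copied, which is what makes continuous backup practical.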

2. Business productivity software.

In the past, business productivity software was incredibly expensive. For example, the Microsoft Office suite of products (including everything from Microsoft Word to Microsoft Outlook) required an onsite server (or servers) and a set amount of expensive software licenses. You were locked into agreements for a set number of licenses that often lasted years. Now, services like Microsoft Office are delivered on a subscription model over the Internet. Your staff pays a low monthly fee to “subscribe” to the latest versions of Microsoft Office. You benefit from no hardware maintenance and pay only for the exact number of users you have at any given time. Plus, you don’t have to worry about updating your software; that all happens automatically over the Internet.

3. Document management.

Not only does the cloud provide more security for documents, but it also offers permission-based, authorized access to documents by employees anytime, anywhere. Because people now often work from a variety of locations (office, home, coffee shops, airports) and through a variety of devices (desktops, laptops, tablets, smartphones), it helps to have your organization’s documents stored, centralized, and accessible in one place. Traditional setups such as onsite servers lead to access problems and headaches maintaining equipment and managing storage space. Plus, through the cloud, cities can more easily apply record retention schedules to keep their document archives up to date.
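Permission-based access is conceptually simple. As a minimal sketch (the roles and document types below are hypothetical, not from any particular product), a cloud document system checks a user’s role against the document type before granting access:

```python
# Hypothetical role-to-document-type permissions for a city document store.
PERMISSIONS = {
    "clerk": {"minutes", "agendas"},
    "finance": {"budgets", "invoices"},
    "admin": {"minutes", "agendas", "budgets", "invoices"},
}

def can_access(role: str, doc_type: str) -> bool:
    """Allow access only if the role is known and covers the document type."""
    return doc_type in PERMISSIONS.get(role, set())

print(can_access("clerk", "minutes"))   # a clerk may read meeting minutes
print(can_access("clerk", "budgets"))   # but not financial documents
```

Real cloud document management systems layer audit logs and per-document permissions on top of role checks like this, but the payoff is the same: the right people see the right documents, from anywhere.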

4. Accounting and financial software.

Moving your accounting and financial software to the cloud provides many of the same benefits as stated above: it lowers your costs, eliminates hardware, provides employees anytime / anywhere access, and secures your financial data more effectively. We often see organizations with traditional accounting software struggle with keeping data updated, upgrading the software, and giving people (especially third parties) access. With cloud accounting software, the right people have authorized access and you can limit access as needed. People can access the system from anywhere, and your data is more likely to stay up-to-date in real time. The accounting data is also likely to be backed up more rigorously and routinely by a cloud vendor.

5. Project management software.

Again, you’ll lower costs and maintenance headaches by going to the cloud for project management. But project management software works especially well in the cloud. Think about it. A project often involves a variety of employees in the office, employees offsite, vendors, and other third-party contacts, all of whom need to coordinate with each other and produce results. With more and more people working remotely today, traditional onsite project management software becomes more of a bottleneck with each passing year. If someone cannot access the software without coming into the office, project statuses lag and real-time collaboration suffers. By using one of the many excellent cloud project management solutions, multiple people can access the software from anywhere, you can set clear permissions for users, and all communication and deliverables for a project stay centralized.


Of course, there are plenty more business applications that work well in the cloud, but these are some of the most common for most organizations. The common themes: you’ll reduce your overall costs, eliminate on-premises hardware that’s expensive to maintain, pay only for the exact number of users of the software, and provide anytime, anywhere access that remains secure and permission-based. With so many benefits, you’ve got nothing to lose and everything to gain by making the switch.

To talk about cloud software benefits in more detail, please contact us.

Thursday, December 18, 2014
Alicia Klemola, Account Manager

Sometimes, you’ve got a special project in mind that requires a significant investment in technology. You might need specialized hardware, software, a mobile app, or other form of technical project expertise. In the past, you may have given the specialized technology vendor a lot of freedom and just assumed they were taking good care of the project. After all, they’re the expert. You’re not. Right?

Actually, there is a lot you can do to mitigate the risks that arise when technology vendors are given free rein over a project: going over budget, missing deadlines, watching scope creep bloat the project, and ending up with a solution that doesn’t meet your needs.

The way to avoid those risks? It’s all about smart vendor management, and this post provides some tips on how you and your trusted IT staff or vendor can help ensure that using a specialized technology vendor doesn’t break your budget or introduce excessive risk into your organization. 

1. Establish clear expectations and accountability through requirements.

Draft a set of requirements that specifically outlines what the vendor is providing you. What do you expect as a final product? How long will it take? Who will do what? Often, when a project begins it’s hard to figure out what the vendor is actually doing and who your point of contact is on the project. Without clear requirements, you will lack a roadmap, set of expectations, and clear roles for all people involved.

2. Set realistic timelines.

If you push a vendor hard enough, they will sometimes overpromise a tight deadline just to get the business. Then, they push back the unrealistic deadlines once you start paying them, knowing they’re in too deep for you to pull the plug. Work with an experienced project manager, preferably from your IT staff or a trusted vendor, who can plan appropriately against the requirements. An experienced IT project manager can help you judge whether a timeline is realistic based on factors such as dependencies within your organization that the specialized vendor would not know about.

3. Ensure that people with the right skills are assigned to your project.

It’s not uncommon for vendors to use experienced salespeople to sell you on a product or solution. Then, once they’ve sold you, the vendor assigns junior-level, inexperienced engineers and managers to your project. Experienced IT staff or a vendor can help you identify what skill sets a project needs, such as having senior engineers closely oversee, or even perform, some of the critical work. It’s a red flag if a vendor is unable to provide critically skilled talent on an important project.

4. Have IT professionals manage the day-to-day details of the specialized vendor.

We are passionate about the concept of vendor management. While organizations can set or oversee some of the high level business requirements, it helps when your IT staff or vendor can oversee the technical work of specialized vendors. For example, when a software vendor needs to integrate its software with your organization’s existing systems or databases, having an independent IT professional stay in communication with the vendor and ensure that technical tasks are followed correctly is essential in reducing errors and delays.

5. Collaborate with the specialized vendor as much as possible.

When it makes sense, collaborate with the specialized vendor rather than having their people handle all of the work. When your team is integrated into the vendor’s work, you have more opportunity to understand and oversee what the specialized vendor is doing. Ideally, both a non-technical business decision maker and an IT representative from your staff or a vendor will take part in a project. Build roles and responsibilities into your requirements to ensure that key stakeholders from your organization have clear involvement in the project.


A hands-off approach to vendor management puts you at risk even if it’s something as simple as buying a computer or router. Why risk even more with complex specialized technology projects? When these five tips are followed, we see that specialized technology vendors do a much more thorough, responsible job at staying in communication with you and following a rigorous project schedule. Plus, you make sure that your staff stays informed and educated about the project, transferring important knowledge into your organization. Remember that vendors often leave after a project, so it’s important to keep as much knowledge of the project in-house as possible.

To talk about vendor management for specialized technology projects in more detail, please contact us.

Thursday, December 11, 2014
John Miller, Senior Consultant

You may often hear the phrase “business driver” when some consultants refer to information technology. It’s an overused phrase and often gets thrown around without meaning a great deal. In the meantime, it’s much easier to think of information technology as extremely tactical, purchased out of bare bones necessity to accomplish basic things like run software, provide employees with computers, and share electronic data. Beyond that, information technology as a “business driver” might sound like inflated rhetoric.

However, there are some important insights for organizations once they unpack the term “business driver” and apply it to information technology. In our work with organizations, we try to bridge the gap between business and technology for non-technical people by showing that many technology decisions should be spearheaded by non-technical decision makers. Of course, it helps to have experienced IT staff or a vendor to suggest what’s possible and how to get it done, but there are many ways that non-technical decision makers can use technology to drive the business. 

1. Save time, money, and resources.

The most common reason that many technology solutions exist is to trim operational costs. By nature, most information technology solutions were designed to create a more efficient way of doing things—from backing up data to sharing meeting notes. Look for areas of your organization where you feel excessive time, money, or other resources are draining your budget. Then explore if technology solutions exist to automate a manual process or reduce hardware that you have to maintain.

2. Mitigate risk.

Federal and state laws make information technology increasingly essential as a way to meet legally mandated levels of cybersecurity. That includes data backup (both onsite and offsite), encryption, antivirus, firewalls, and any other information technology that helps secure and protect data. The steady stream of data breaches and stolen electronic information, along with significant advances in cybersecurity, means that your organization needs a certain standard of information technology to mitigate the risk of lawsuits, fines, public anger, and lost business.

3. Enhance employee productivity, mobility, and morale.

Information technology helps your employees work more happily and efficiently. If you can use information technology to free up time, such as by automating a manual process, then your employees can direct their energy toward more productive tasks. The cloud can help employees access data and information remotely, giving them the flexibility to work from home or while traveling. And when it’s difficult to compete for talented employees, good technology that enables them to work productively and flexibly means less chance of turnover and losing good workers.

4. Connect separated departments and groups.

It’s not uncommon in some organizations for one department to use its own software or set of servers while another department uses completely separate software and servers. When these groups are supposed to work together and share information, such separation can be disastrous and wasteful. In today’s business and government organizations, cooperation and interdisciplinary projects become more and more frequent in order to accomplish major business goals. Consider centralized email, document management, software, and servers to manage resources from one place, provide a common experience for everyone in the organization, and make sharing information much easier.

5. Accelerate business goals and objectives.

Many business goals and objectives are set without the organization fully knowing whether technology can help or hinder them. Your organization might want to let people pay for products and services online. Cities might want a mobile app that lets citizens report problems such as potholes. Even a website redesign involves many parts and pieces that can lead to disaster or excessive cost if done poorly. An information technology consultant can help you weigh feasibility, cost, and options, including possibilities you may not have known existed. But you need to be the one who throws out ideas and sees whether they can work.


Obviously, you’re not expected to understand technology in technical detail in order to make decisions about it. At the same time, it helps to surround yourself with experienced IT professionals who understand both business drivers and the technology that best helps accomplish specific business goals and objectives. From operational cost savings to empowering your biggest business decisions, using technology can help or hinder your organization depending on how well you integrate it into everything you do.

To talk about the business impact of information technology, please contact us.
