We put the IT in city®

CitySmart Blog

Wednesday, April 19, 2017
Mike Smith, Network Infrastructure Consultant

Licking County, a county east of Columbus, Ohio, recently suffered a serious ransomware attack on its IT systems. Ransomware is a specialized form of malware that encrypts files, making them nearly impossible to access unless you pay criminals a ransom to unlock them. Many organizations pay the ransom despite the FBI and other law enforcement agencies recommending against it.

Luckily, Licking County managed to mostly survive the attack thanks to some important best practices it had implemented. Let’s look at the good, bad, and ugly of this situation to extract some important lessons.

The Good

Data backups

The difference between being crippled by a ransomware attack and surviving it relatively unscathed comes down to data backups. Licking County ended up losing only about one day’s worth of data for most systems. Another county referenced in the article ended up paying a $2,500 ransom to cybercriminals because it did not invest in data backup.

Activating a plan to shut down the network

To stop the spread of the ransomware, Licking County shut down its network. Clearly, the county had a plan in place and enacted it when the ransomware virus hit. By planning ahead, they were best prepared for what to do to keep the virus contained and to minimize impact.

Rebuilding systems based on highest priority data

As part of its disaster recovery plan, the county rebuilt its systems based on the highest priority data first. The article references data such as “servers that house felony-case tracking for the prosecutor's office and the auditor's property-records database.” Any disaster recovery plan needs to have a clear plan as to how data will be restored—and in what order of priority.

The Bad

Rebuilding systems will take a lot of time

Licking County is a big county and so it needs to reformat about 1,000 computers as part of its rebuild. That takes a lot of time. Even smaller organizations will need to spend significant time rebuilding servers and reformatting computers.

Direct and indirect costs

Directly, the costs of billable IT time, and possibly of upgrading networking equipment and cyber protection software, can take a big bite out of your budget. Indirectly, lost productivity wastes expensive employee salaries and can delay major projects that are on tight timelines.

Impacts to citizen service

After a disaster, a crippled government entity will not be able to serve citizens at full capacity. The mission of government gets impacted when ransomware hits. County Commissioner Tim Bubb says, “We have lost a large part of our focus on serving the people of Licking County. What price do you put on that?"

Potentially weak firewall and network connections

A Columbus Dispatch article mentions that the county needs to shore up its “firewall and network connections.” An improperly configured firewall can leave ports open that allow hackers to easily gain access to servers and steal information. Setup of switches, routers, and other networking equipment also impacts security.
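As a rough illustration of what "ports left open" means (not a substitute for a professional firewall audit), a short script can probe whether commonly attacked ports are reachable. The host and port list below are placeholder assumptions; run checks like this only against systems you are authorized to test.

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, unreachable, or timed out
        return False

if __name__ == "__main__":
    # Example ports that generally should not be exposed to the internet
    risky_ports = {23: "Telnet", 445: "SMB", 1433: "SQL Server", 3389: "RDP"}
    for port, name in risky_ports.items():
        status = "OPEN" if is_port_open("127.0.0.1", port) else "closed"
        print(f"{name} (port {port}): {status}")
```

A firewall review should go much further (rule audits, logging, segmentation), but a simple reachability check like this can quickly confirm whether a service you thought was blocked is actually exposed.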

Potentially weak passwords

The same article mentions that the county needs to encourage employees to change passwords more frequently. In a recent blog post, we said, “The longer a password is in use, the more likely that hackers will be able to crack it. The more you change passwords, the more difficult you make a hacker’s job.”

The Ugly

911 dispatching affected

An article published in the Newark Advocate the day after the incident stated “...the 911 Center has been operating in manual mode since late Tuesday night. The 911 Center phones and radios work, but dispatchers do not have access to their computers. The public can still call 911 for emergency police, fire or medical response.”

While the 911 Center was not completely shut down, any impact to 911 or other critical emergency services can literally affect lives in the wake of a ransomware attack.

Employees click on too many suspicious emails

One of the biggest cybersecurity threats is people. No matter how great your data backups, antivirus, firewalls, and security measures, hackers and cybercriminals still often break into a government entity through people clicking on suspicious websites and email attachments.

Note this paragraph in the Columbus Dispatch story:

Fairfield County started working last year to tighten procedures to guard against the type of cyberattack that occurred in Licking County, said Fairfield County IT Administrator Randy Carter. He said he was dismayed when he sent a test phishing email to county employees in September and more than 25 percent clicked on it. Carter plans to provide training to employees on what emails to avoid.

25 percent! One in four people got fooled by these dangerous emails. Each click on one of these emails opens you up to the threat of a virus or ransomware.

Cybercriminals targeting government more and more

Cyberattacks are growing more numerous and more targeted, and government entities, including cities, are ripe targets.

Are you prepared?

  • Like Licking County, do you have data backups to recover from a ransomware attack?
  • Do you have the right network equipment and modernized technology to protect yourself?
  • Are your employees trained about the dangers of clicking on malicious emails and websites?

If you need help protecting yourself from a ransomware attack, reach out to us today.

Tuesday, April 11, 2017
Brandon Bell, Network Infrastructure Consultant

A city had operated for a long time with tape backup and decided to upgrade. City administrators heard from their IT staff that they needed something more reliable than a manual solution reliant upon busy people to both conduct the backup and store it offsite.

The city spent a lot of money on a modern, complex data backup solution, and its IT staff assured everyone that this automated beast would solve all their problems. Indeed, the data backup worked automatically. In a meeting, IT staff showed city department heads the wonder of the data backup system by retrieving a few PDF documents from the backup data storage. To city council and the public, the city administrator proudly said they had ticked data backup off their list. Problem solved!

One day, a fire tragically tore through most of city hall. The building ruined, city staff needed to relocate to a temporary building until a new city hall was built. But thank goodness that, despite all the servers being destroyed, the city could retrieve its data.

Or not. When IT staff attempted to restore the city’s data through its backup, most of the major databases, applications, and data would not restore. A few chunks of data—like some people’s individual documents—were okay. But the city’s most important information was not there.

And so...an expensive backup solution became nearly worthless. Why? Upon further investigation, the city administrator was told that nobody ever tested the data backup. “But...it was an expensive solution,” the city administrator said. “And my IT staff said that it was automated. The data backup solution’s reporting even said it worked.”

Well...it didn’t. And that’s all that mattered when the city administrator had to now explain why this expensive investment failed them after a disaster—and failed to do the exact thing it was supposed to do.

Preventing This Disaster

One aspect of data backup and disaster recovery—testing—is nearly as crucial as simply having data backup at all. No matter what kind of data backup you use, you need to test it. Otherwise, you don’t know that it’s working.

Let’s look more closely at the errors in our city scenario above.

Error #1: Assuming the data backup works.

A data backup solution will often look like it’s doing its job. From manual solutions like tape to more sophisticated automated data backup servers, the data backup application will often indicate that the process is a success or failure. But no matter what the application tells you, you don’t know that it works until you test it.

Error #2: Not testing all the backed up data.

Calling up a few files such as PDFs from the data backup storage is not testing. When a disaster hits, you will need to be fully operational with your databases, software applications, website, email, and documents. For example, will your accounting system work from a backup copy? When you test, test everything. Simulate what would happen if an actual disaster hit.
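To make one piece of this concrete, here is a minimal sketch (in Python, with hypothetical directory paths) of a file-level restore check: comparing checksums of restored files against the originals. A real test would go further and exercise databases and applications end to end, but even this catches silently missing or corrupted files.

```python
import hashlib
from pathlib import Path

def checksum(path):
    """SHA-256 hash of a file's contents, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_dir, restored_dir):
    """Return a list of relative paths that are missing or differ in the restore."""
    source, restored = Path(source_dir), Path(restored_dir)
    problems = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        rel = src_file.relative_to(source)
        restored_file = restored / rel
        if not restored_file.is_file():
            problems.append(f"MISSING: {rel}")
        elif checksum(src_file) != checksum(restored_file):
            problems.append(f"CORRUPT: {rel}")
    return problems
```

An empty result means every file came back intact; anything else means the backup was not actually doing its job, which is exactly what you want to discover during a drill rather than after a disaster.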

Error #3: Failing to develop and document a testing plan.

Testing needs to be part of your overall disaster recovery and business continuity plan. The act of testing not only verifies that you can access the data but also allows you to practice how data recovery will work. Who does what? How fast will the data be restored? In what order? Where will you access the recovered data?

You want to run into issues during testing and deal with them in a simulation—rather than after a real disaster.

Uncertain about your data backup solution? Are you testing it at least quarterly? Reach out to us today.

Tuesday, April 04, 2017
Jabari Massey, Network Infrastructure Consultant

On the surface, a coastal city did some correct things to back up its data. The city had a few servers in a physically secure basement room that were well-maintained by IT staff. One of the servers backed up important data. In case a server failed, the backup server would run until the city could replace the original server.

A long time had passed since the city last experienced a hurricane. When a hurricane finally seemed imminent, the city was ordered to evacuate until the massive storm passed. The city manager and IT staff didn’t think much about the servers other than placing them upon concrete blocks in case of flooding. As long as the city implemented its emergency action plan and evacuated everyone safely, the city manager assumed its information technology would remain safe.

After the hurricane passed, city staff returned to find that no massive devastation occurred but they did experience heavy flooding. The IT staff had placed the servers upon concrete blocks as a precautionary measure, but they learned an incredibly hard lesson in hindsight.

Located in a basement room, the servers sat below sea level. Although the rest of city hall experienced moderate flood damage in places, the basement had filled up to dangerously high levels. All of the servers—including the backup server—were rendered unusable by the flooding.

With a sinking feeling, the city manager realized all critical data—including financial, public safety, document management, email, and website data—was gone. The only backup server got destroyed along with the others. It might be easy for the city manager to point some blame in the direction of the IT staff, but it was well-known that he had refused requests to explore other data backup options because of “budget concerns.”

Now, the mayor, city council, the media, and public would be asking questions.

Preventing This Disaster

Sure, the city manager and IT staff made a bad decision to place servers in a basement room below sea level. But their errors go deeper than this poor choice of physical location for the servers.

Let’s look at the errors in the story above.

Error #1: Locating servers in a flood-prone area of your building.

Getting the most obvious error out of the way, it’s clear that the servers needed to reside on an upper floor. In addition, the server room needed to mitigate flood risks through preventative measures such as water leak sensors or sealing off places where water can enter.

Error #2: Lack of offsite data backup.

While locating the servers on a higher floor may have prevented this immediate flooding disaster, it’s still not a full disaster recovery plan. Anything can happen to your technology onsite. To guarantee full recovery of your data after a disaster, you need an offsite data backup component to your emergency plan.

We recommend storing your data offsite in geographically dispersed locations (such as data centers on both the East and West Coasts). Then, even if the worst disaster wipes out your buildings, you will be able to recover and access your data.

Error #3: Lack of technology planning.

The lack of offsite data backup also signifies a larger issue—a lack of planning. The city had developed an emergency plan and used it in the case of the hurricane. But when was the plan developed? When was it last updated? Did it include technology-related scenarios? What was the plan to protect data in case of a disaster?

First, the city needed to update its emergency plan and include technology. That would have addressed technology-related gaps in the city’s data backup, disaster recovery, and business continuity plans. Second, the city needed regular technology planning meetings (at least once a quarter) and ongoing monitoring to ensure that data backups were tested and working. This regular monitoring and planning would help the city adapt to changes (such as new technology, more staff, building changes, etc.) and ensure that the risk of data loss is minimal.


Flooding is one of the most common disasters. It can happen anywhere in the country and devastate a city. Because citizens will rely on your city after severe flooding, you must be operational as fast as possible. That means having access to your data—your website, your documents, and your applications that are essential to operations.

By developing a disaster recovery plan that includes an offsite data backup component, you will lessen the risk of permanent data loss and angry “Why?” or “How?” questions after the fact from council, the public, and others.

Concerned about your data backup and disaster recovery? Reach out to us today.

Tuesday, March 28, 2017
Ryan Warrick, Network Infrastructure Consultant

In recent posts, we’ve talked about disasters at cities that result in permanent data loss, incredible damage to city operations, and city department heads wondering if their job is now at risk—all sadly because of preventable risk. The stories we use to illustrate these disasters—and the lessons learned—are based on a combination of many, many scenarios we’ve witnessed at cities throughout the years.

However, we recently saw a story that’s quite specific to one city and a very public, front page news illustration of some important IT-related lessons. Let’s look at what happened to the City of Miami Beach, Florida in December 2016.

Third Parties Steal $3.6 Million—and No One Notices for Six Months

In a nutshell, unknown third parties stole the account and routing numbers from the city’s banking account. According to the Miami Herald, the criminals “[rerouted] automatic payments intended to pay vendors and other government bills.” The criminals did it for six months and stole $3.6 million before staff in the finance department noticed.

We carefully reviewed the Miami Herald article and the city manager’s report. While this crime is a form of cybercrime, the situation also includes lessons about IT-related processes and controls that are incredibly important to cities. A few bad practices that cities need to avoid stick out from our analysis of the report.

1. Completely ignoring basic, elementary best practices.

The city of Miami Beach was offered free fraud control tools when they set up the account in 2012—the same kind of fraud control tools that many individual banking customers enjoy. Did the city take advantage of these tools? No. Maybe they had a reason at the time such as wanting to implement their own fraud controls. If so, that never happened.

Cities need to stay aware of and implement important best practices that help mitigate information security risks. In this case, both finance and IT staff needed to say “yes” to such an obvious best practice back in 2012.

2. Using easy-to-steal information as authentication for financial transactions.

Think about how many people in a city can take a quick peek at a check. If third parties could steal city money through only this information, then the city had a security vulnerability that was wide open for people to exploit.

We find that cities also have similar weaknesses in areas such as passwords, unencrypted wireless devices, and website hosting that make it easy for hackers to exploit security vulnerabilities.

3. Apparent lack of financial data oversight.

In a recent post about data processing, we said, “Experienced IT professionals should monitor everything related to your data processing such as transactions and processing, errors and incorrect information, overrides, unauthorized use of the application (especially when it appears that someone is altering data or ignoring/tampering with processes), reconciliations, and application performance (such as after a power outage or server failure).”

Obviously, finance department staff have an even more important role in monitoring this information too. While online banking is great, even an individual consumer is unwise not to review banking transactions regularly. By not reviewing its accounts for six months and simply hoping everything was okay, the city took on great risk. Cities need to become more proactive at monitoring and reviewing important aspects of their operations where data changes constantly, from accounts payable to information technology.

4. Lack of modernization.

Many cities often hear the word “modernize” and think of it as “unnecessarily wasting money or time on something new and fancy that we don’t need.” Sure, some solutions might fit that definition. But technology modernization is important especially when your old technologies and processes lead to security vulnerabilities, inefficient operations, and significant liability.

In the case of Miami Beach, the city manager’s report includes many “sudden” modernizations in one fell swoop such as ACH fraud controls and using UPIC (Universal Payment Identification Code) to avoid sharing confidential banking information. The city manager even notes in the report that “the ACH Fraud Control program already prevented an unauthorized ACH transfer.”

I know we beat this drum a lot. But why do cities wait? Why do cities put off modernizing their technology and processes until a massive crisis hits? We see this “putting off” logic hold true at many cities for data backup, disaster recovery, website hosting, records and document management, email, and aging hardware. In all of these cases, lack of modernization increases the risk of a significant city incident or disaster.


Learn from cities like Miami Beach. Are you sure that fraudsters aren’t currently stealing money from you? Is your technology modernized in such a way that you aren’t headed for a major disaster like permanent data loss?

If you are worried about addressing critical technology aspects of your city before a disaster happens, reach out to us today.

Tuesday, March 21, 2017
Victoria Boyko, Software Development Consultant

Despite the perceived importance of ADA-compliant websites, many city websites do not comply with best practices that help disabled people access content. While the ADA and organizations such as the W3C provide detailed guidelines and best practices, very few enforceable laws exist to hold cities accountable. Plus, even if a website designer follows all ADA best practices, a city employee may upload content to the city's website that doesn’t meet these requirements.

While some signs exist that the Department of Justice may create enforceable ADA-related website regulations in 2017, it’s not definite at this time. But that doesn’t mean your city should ignore ADA-compliant website best practices.

By making your website ADA-compliant, you:

  • Help extend your website services to disabled people.
  • Improve the overall functionality of your website.
  • Get ahead of future laws and regulations, avoiding violations that may be expensive to correct later.

If you haven’t thought about ADA compliance for your website, then where should you start? While existing guidelines cover a lot of technical ground, here are some best practices that should be easy to tackle with the help of your website designer and whoever creates and uploads content to your website.

1. Describe images with text.

Many people just upload an image to a website as quickly and simply as possible. However, there should be an option on the back end of your website to provide alternative text (or “alt text”) for an image. For example, if you place a picture of city hall on your website then the alt text may say “Picture of city hall on a sunny day.” If someone is blind or cannot see very well, they may use a screen reader tool that describes all images on a page. When you fill out the alt text, you make images “readable” and accessible to people with vision problems.
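If you want to audit pages you already have, a small script can flag images that lack alt text. This is a rough sketch using Python's standard library; note that an intentionally empty alt attribute is valid for purely decorative images, so flagged images still need human review.

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collects the src of <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if not attrs.get("alt"):
                self.missing.append(attrs.get("src", "(no src)"))

def find_images_missing_alt(html):
    """Return a list of image sources missing alt text in an HTML string."""
    checker = MissingAltChecker()
    checker.feed(html)
    return checker.missing
```

Pointing a script like this at your most-visited pages is a quick first pass before a fuller accessibility review.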

2. Provide alternate ways to access video and audio content.

Videos and audio files (like podcasts) have become more and more embraced by cities. But what if someone can’t see a video? Or what if someone can’t hear the audio? Provide alternate ways for people to access the content. For example:

  • Offer closed-captioning for videos with audio content. Some video services will do this automatically for you (although it’s a good idea to spot check the quality of the closed-captioning) or you can do it manually.
  • Offer transcripts for videos and audio files.
  • In some cases, a summary description may be sufficient for visually heavy videos with little spoken content.

3. Provide a clean, simple navigation and website structure.

If your website is a structural mess, then it will be even worse for people with disabilities who try to navigate it with screen readers or keyboards alone. Your website’s information architecture (meaning the way your webpages are structured and organized) needs to be as simple and clean as possible. For example, you wouldn’t want to clutter your homepage with a dozen things about your city’s history while barely mentioning or providing links to your most important city services.

4. Work with your designers to ensure that people can adjust colors and font sizes with ease.

Many disabled people with vision problems often need to adjust the contrast and sizing on their computers to see what’s on their screen. While the design specifications for ensuring ADA compliance are complex, most modern websites allow disabled people to adjust contrast and sizing. If you’re not sure about your city’s website (especially if you haven’t modernized it in a long time), then ask someone with website design experience to help you assess this aspect of accessibility.

5. Make all content accessible by keyboard alone.

Some disabled people cannot use a mouse and click on website content such as buttons or links. They need to rely only on a keyboard to get to it. If you have content on your website inaccessible by keyboard, then make it accessible as soon as possible. You should also consider adding a “skip navigation” link so that keyboard users can skip the often long navigation tabs (the ones seen on every page). That will save those people from wasting a lot of time.

6. Avoid flashing images.

Luckily, most modern websites avoid flashing images because they look tacky. However, if you are tempted to use them then consider that they may cause seizures in some people.

7. Follow writing best practices.

Write simply, clearly, and concisely. This is a good best practice anyway but it also helps disabled people who need information stated as clearly as possible. Rambling text, typos, and bad grammar prevent you from communicating to your audience. Consider hiring a professional writer to write your content if you’re unable to ensure a high writing standard.

8. If you hyperlink text, then make sure it’s descriptive.

“Click here” is not descriptive. “January 5, 2017 City Council Agenda” is descriptive. When disabled people use screen readers, they often look for links to take them to another webpage. Make the text you hyperlink contain a specific description instead of something vague.
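As a rough sketch of how you might audit a page for this problem, the script below (Python standard library, with an assumed list of vague phrases) collects links whose visible text is non-descriptive.

```python
from html.parser import HTMLParser

# Assumed list of vague link phrases; expand to fit your content.
VAGUE_PHRASES = {"click here", "here", "read more", "more", "link", "this page"}

class VagueLinkChecker(HTMLParser):
    """Flags <a> elements whose visible text is a vague phrase."""
    def __init__(self):
        super().__init__()
        self.in_link = False
        self.text = ""
        self.vague = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_link = True
            self.text = ""

    def handle_data(self, data):
        if self.in_link:
            self.text += data

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
            if self.text.strip().lower() in VAGUE_PHRASES:
                self.vague.append(self.text.strip())

def find_vague_links(html):
    """Return the visible text of links that use vague phrasing."""
    checker = VagueLinkChecker()
    checker.feed(html)
    return checker.vague
```

Each flagged link is a candidate for rewriting with a specific description of its destination.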

9. Offer an alternate version of PDF documents.

Unfortunately, many screen readers struggle with PDF documents, especially untagged or scanned PDFs. If the thought of converting tons of PDF documents to HTML or rich-text format horrifies you, then talk to your IT staff or vendor. You may be able to find a tool that can convert your PDFs to HTML. Then, it’s a matter of going through the PDFs you offer on your website and creating HTML versions of each document.

10. Avoid cutting and pasting pre-formatted content to your website.

When city employees upload content to websites, we often find that they make the mistake of posting pre-formatted content. For example, people may cut and paste content from a Microsoft Word document to the city’s website. The problem? Microsoft Word content contains a lot of HTML code that makes sense when you’re working in Microsoft Word—and not so much sense when you transfer it somewhere else. That’s why what looked great in your word processing software can look awful on your website.

Usually, cutting and pasting into Notepad first (a free application that comes with nearly all computers) and then cutting and pasting the Notepad version into your website’s content management system will remove junk formatting and convert your words into clean, plain text.
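That same "strip everything down to plain text" step can be automated. This sketch uses Python's standard library to drop tags and embedded styles from pasted HTML, much like the Notepad trick, before the content goes into a content management system.

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Keeps only the text content of pasted HTML, dropping tags and styles."""
    def __init__(self):
        super().__init__()
        self.skip = False      # True while inside <style> or <script>
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in ("style", "script"):
            self.skip = True

    def handle_endtag(self, tag):
        if tag in ("style", "script"):
            self.skip = False

    def handle_data(self, data):
        if not self.skip:
            self.parts.append(data)

def strip_formatting(html):
    """Return plain text from an HTML fragment, with junk formatting removed."""
    extractor = TextExtractor()
    extractor.feed(html)
    return "".join(extractor.parts).strip()
```

The output is clean text you can then format deliberately inside the CMS, rather than inheriting word-processor markup.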


Following these best practices will give you a good head start for making your website ADA-compliant. For more detailed best practices, refer to the following resources.

Website Accessibility Under Title II of the ADA

Web Content Accessibility Guidelines (WCAG) 2.0

Need help assessing the ADA compliance of your website? Reach out to us today.

Tuesday, March 14, 2017
John Miller, Senior Consultant

A small city with two servers also stored many paper documents containing critical information. The city backed up its servers with tape-based data backup, which a city employee would take home every week or so to store “offsite” at their house. Many of the paper documents were not replicated electronically, and so these paper documents were the only versions in existence.

One night, a fire began that destroyed nearly all the building before firefighters arrived at the blaze. Fire alarms went off but no fire suppression occurred until the fire department showed up.

Assessing the damage the next morning, the city discovered that its paper documents and servers were completely destroyed. With the paper a total loss, the city decided to recover the server data from the tape backups. However, after two days of trying to restore the data, the city could only retrieve about 10% of it. Many of the tape backups hadn’t been tested, and the city didn’t realize that the backups hadn’t been running properly for a long time.

As a result, operations ground to a halt and the city found itself in dire trouble. It lost its accounting and billing systems along with many public records and documents. So many critical operational records were lost related to accounting, taxpayers, and businesses. And the public outcry had yet to begin: the city still had to admit the data was lost and explain why it had not been properly backed up.

Preventing This Disaster

A fire can happen to any city at any time. Is your city prepared? For such a common disaster, we find that many cities do not have disaster recovery plans that account for a simple yet deadly fire.

Let’s look at the errors in the story above.

Error #1: Using paper as the only copy of important documents.

In today’s electronic information age, relying only on paper for important documents is way too risky. A simple fire can wipe out paper in a matter of minutes. Paper also fails in a flood, tornado, or other natural disaster. Any paper-based documents that are critical to your city need to be scanned electronically and backed up offsite to ensure they are not lost.

Error #2: Poor offsite data backup plan in place.

Relying on a city employee to take tapes offsite every week to their house is not a sure-fire plan. First, these tapes were not tested on a regular basis. When the city actually needed to restore data, most of the tapes failed. Second, too many security and liability risks exist when a city relies on an employee to manually collect backup tapes and store them in a private home. What happens if the employee is negligent or disgruntled? What if they forget one week to take the backups home?

Error #3: Lack of appropriate fire suppression for a server room.

Any room that stores servers needs best-of-breed fire suppression. Fire alarms alone are inadequate. Most data centers feature fire suppression technology that helps eliminate or reduce the severity of a fire. If your city decides to host its own servers, then you need to explore fire suppression options beyond an alarm.

Error #4: Lack of an overall disaster recovery plan.

The city clearly did not think through the consequences of a disaster. Otherwise, it would have identified critical information—such as its paper documents—and planned for a worst-case scenario such as a fire. This plan would include:

  • Identifying which data is most critical and cannot be lost.
  • Estimating the maximum amount of acceptable downtime before restoring city operations.
  • Detailing how the city will get up and running after a disaster.
  • Outlining contingency plans while the data is being restored.
  • Ensuring that any data backups are tested regularly.

While large disasters like tornados can seem improbable, cities need to keep in mind that disasters also include more common scenarios like fires. A fire can wipe out critical information quickly. Your disaster recovery plan needs to account for both paper-based and electronic information, ensuring that you can recover your most critical information soon after a fire or other common disaster.

Questions about your city’s ability to protect and recover your most important information after a fire? Reach out to us today.

Wednesday, March 08, 2017
Dave Mims, CEO

We know. It’s the federal government. Yet, cybersecurity legislative trends show that security risks within government—whether it’s federal, state, or local—are being addressed because they affect national security and the privacy of citizens. There’s an incentive for Congress to help your city shore up its cybersecurity.

The federal bill is called the State Cyber Resiliency Act and it’s in the proposal stage. As a bipartisan bill, it has a higher chance of making it through the House and Senate, depending on Congressional priorities. Matt Zone, President of the National League of Cities, is quoted as saying:

“Cities manage substantial amounts of sensitive data, including data on vital infrastructure and public safety systems. It should come as no surprise that cities are increasingly targets for cyberattacks from sophisticated hackers. Cities need federal support to provide local governments with the tools and resources needed to protect their citizens and serve them best."

The idea is that FEMA will administer grants for state, local, and tribal governments. Particulars about the grants are not clear at the moment as the text of the bill has not yet been submitted.

We’ve been concerned about city cybersecurity for a long time, and it’s reassuring to us that lawmakers want to help cities address this issue. An article from FCW points out some drivers behind this bill:

  • “[State, local, and tribal governments] typically devote less than two percent of their IT budget to cybersecurity.”
  • “…in 2015, 50 percent of state and local governments had six or more cyber breaches within the last two years.”

We’ll be tracking this bill (S.516), which was introduced last week. Stay tuned!

Wednesday, March 08, 2017
Dave Mims, CEO

SB 138, introduced in the Arkansas State Legislature on January 17, 2017, was passed in the Arkansas Senate on March 6 and now proceeds to the House. Why is SB 138 so important? And why are we, a municipal-focused technology company, pointing it out?

The bill states that an Arkansas municipal charter can get revoked (yes, revoked!) if the Legislative Joint Auditing Committee finds two incidents of non-compliance with accounting procedures in a three-year period. Revoking a charter is serious, rare, and extreme. That’s pretty much the end of your municipality.

The Arkansas Legislative Audit (ALA) includes both general controls and application controls around information systems. For municipalities, accounting systems are often the most important information system they oversee.

According to the ALA:

  • General Controls are “mechanisms established to provide reasonable assurance that the information technology in use by an entity operates as intended to produce properly authorized, reliable data and that the entity is in compliance with applicable laws and regulations.”
  • Application Controls “relate to the transactions and data for each computer-based automation system; they are, therefore, specific to each application. Application controls are designed to ensure the completeness and accuracy of accounting records and the validity of entries made.”

While this bill has yet to pass the Arkansas House and get signed into law, its appearance and passage by the Arkansas Senate are a sign that municipalities are being held more—not less—accountable for information security, compliance, and best practices related to information technology.

Even if you’re not an Arkansas municipality, you should still get ahead of the curve. Federal and state laws that urge stronger technology-related compliance and best practices seem inevitable.

In the meantime, you can track the Arkansas bill and read up on the different components of what the ALA examines in its audit.

Concerned about the state of your information security or compliance with the law? Reach out to us today.

Tuesday, February 28, 2017
Brian Ocfemia, Technical Account Manager

A city had relied on an aging email server for 10 years. Purchased in 2007, the server froze up often and constantly hit storage limits. Citing “budget,” the city did not want to invest in a new server despite these issues.

As a result, employees were often forced to delete emails in order to free up space. A city policy said the employees needed to keep “important” emails. However, it was unclear what “important” meant and the policy only loosely defined how the employees should retain them. Some employees used flash drives, some used external hard drives, and some even transferred files onto personal laptops.

One day, an outside investigation began that concerned a city department. Allegedly, funds may have been stolen and investigators wanted to get to the bottom of what happened. Suddenly, all eyes were on the city as word got out to the media.

The media made several FOIA requests to see emails related to the city department under investigation. Once the city clerk began trying to fulfill the requests, she hit a wall. Not sure who had kept what, she began to fear that key emails had been deleted. When she sent requests to city employees in that department, the clerk received only uncertain replies about who had the specific emails.

Within days, she realized the city might not be able to fulfill the FOIA request—even with a delay. The crushing realization settled in that emails the city was required by law to keep may have disappeared. Once the media suspected this had happened, they began reporting on the city in a negative light—casting suspicion over the city in the local paper. The stories spread to various other papers around the state. Investigators also noted the serious nature of these missing emails and began to talk of misdemeanors, fines, penalties, lawsuits, and even possible prosecution for employees who may have destroyed records.

Preventing This Disaster

Even for FOIA-related circumstances less serious than this situation, cities can feel painful repercussions when retrieving emails that are public records. Delays, excessive hours consumed searching for emails, storage limitations, and uncertainty about locating emails all increase your risk of liability. Let’s look at some errors in our story that the city committed.

Error #1: Relying on an aging email server.

The city thought it maximized its original email server investment. But holding onto an aging server presents too many problems that impact the accessibility and security of the information you store on it.

  • Cost: It’s expensive to maintain the hardware and software on a server that breaks down a lot, fails to operate at full capacity, and often isn’t supported by the hardware and software vendors any longer.
  • Threat of Server Failure: Whether you have data backup or not, a server failure is disruptive to your operations. Eventually, you will have to buy a new (unbudgeted) server if it fails.
  • Risk of a Data Breach: Older servers are less secure because vendors often stop providing security patches and updates after a specific period of time.

Error #2: Ignoring email storage limits.

Hitting email storage limits is no excuse for not following state retention laws. Today, many cloud email options exist that provide more than enough email storage space for an affordable price. Employees should never have to worry about deleting important emails or storing them in a separate location just because of email storage caps.

Error #3: Relying on employees to manually archive and retain emails.

This city lacked policies and procedures to ensure proper records retention—and it passed the problem along to employees. It’s not a good idea to rely on employees to manually store emails in a consistent, legal way. Most employees have the best intentions—but they get busy, forgetful, or overwhelmed by their roles and responsibilities. They are not necessarily going to retain those emails in the most secure, consistent way.

Error #4: Following a weakly enforced policy not aligned with records retention laws.

State records retention laws specifically note how emails (and other public records) must be archived, retained, accessed, and deleted. Modern email servers can automate much of this process to align with the law. This city clearly needed to leverage technology to automate the records retention process. Too many steps relied on manual, uncertain processes.
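To make the idea of automating retention concrete, here is a minimal sketch of a retention check like the ones modern archiving systems apply. The record categories and retention periods below are hypothetical illustrations, not taken from any state’s actual retention schedule—your state’s law defines the real categories and timeframes.

```python
from datetime import date, timedelta

# Hypothetical retention schedule: record category -> required retention in days.
# A real schedule would come directly from your state's records retention laws.
RETENTION_DAYS = {
    "routine_correspondence": 2 * 365,
    "financial_record": 7 * 365,
    "permanent_record": None,  # never eligible for deletion
}

def deletion_eligible(category: str, received: date, today: date) -> bool:
    """Return True only if the record's legal retention period has fully elapsed."""
    days = RETENTION_DAYS[category]
    if days is None:
        return False
    return today - received > timedelta(days=days)

# A routine email from three years ago is past its (hypothetical) two-year period;
# a financial record of the same age is not.
print(deletion_eligible("routine_correspondence", date(2014, 1, 15), date(2017, 2, 28)))  # True
print(deletion_eligible("financial_record", date(2014, 1, 15), date(2017, 2, 28)))        # False
```

The point of automating this check is that no employee ever has to decide, under storage pressure, whether an email is “important”—the schedule decides, consistently.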


While it’s less likely that a scandal or investigation will happen at your city, it’s not impossible. On whatever level you respond to FOIA requests, it’s your legal duty to provide the information requested. If you can’t, then you’re asking for trouble.

Questions about your ability to respond to a FOIA request? Reach out to us today.

Tuesday, February 21, 2017
Nathan Eisner, COO

When is offsite data backup not offsite data backup? The following story offers an example—and a warning—to cities.

A city was already backing up its data onsite using an extra server. If the server failed at city hall, the other one would take over to restore the city’s data. However, some department heads urged the city to also consider an offsite data backup plan in case of a major disaster. The city manager researched some options and brought in a few IT experts to talk about possible solutions.

After the outside IT experts reinforced the idea of creating both an onsite and offsite data backup plan, the city took a shortcut. The city manager didn’t like the idea of sending data off to a data center. He viewed it as unnecessarily expensive. Plus, he wanted control—to “see” the data when he wished. And so the city nixed the idea of offsite data backup located far away from the city.

As a result, the city worked around these parameters to build an “offsite” data backup plan. Working with their local IT vendor, the city set up a backup server in a building they owned located just down the block from city hall. The city manager argued that this building was separate from the city hall building and, thus, “offsite.” If something destroyed city hall, this server would contain all their data. Problem solved.

Or was it?

One day, a huge EF3 tornado descended upon the city. With winds upward of 150 miles per hour, the tornado destroyed many buildings in a swath of downtown. As the city assessed the damage, they discovered that the tornado destroyed not only city hall but also all buildings on that block—including the “offsite” building that stored the city’s backed up data.

With its data permanently lost, the city found itself crippled at the very moment citizens most needed city hall and public safety operating at full capacity. And even beyond the immediate disaster, the permanent data loss would affect the city’s operations for a long, long time.

Preventing This Disaster

Does this scenario seem unlikely? That’s what cities, businesses, organizations, and people often think...until disaster strikes. With tornadoes in the United States growing more frequent and more devastating, your city may well face this threat—or one like it.

Let’s look at the errors in our story and how your city can avoid them.

Error #1: The city’s definition of “offsite” is not really offsite.

Offsite does not mean down the block. It does not even mean two blocks away. True offsite data backup means many miles away. When your data is stored in a geographic location far from your city, it is far more likely to survive a localized disaster such as a tornado.

We often recommend that you send offsite data to at least two data centers (for example, one on the East Coast and one on the West Coast). It takes some time to set up the technology and the automated data transference to these data centers. But once set up, the offsite data backup runs without the city having to do much of anything. And if a city block is destroyed, your data is safe and accessible from multiple data centers. Your city can start operating within hours of the disaster while you are in the process of ordering new servers.
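In practice, the nightly transfer is just an automated copy of the backup archive to each destination, with a checksum check to confirm every copy arrived intact. Here is a minimal sketch of that pattern—using local folders as stand-ins for two geographically separate data centers, which in a real deployment would be remote storage endpoints:

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file so each offsite copy can be compared against the original."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def replicate(backup: Path, destinations: list[Path]) -> None:
    """Copy the backup archive to every destination and verify each copy."""
    original = sha256_of(backup)
    for dest in destinations:
        dest.mkdir(parents=True, exist_ok=True)
        copy = dest / backup.name
        shutil.copy2(backup, copy)
        if sha256_of(copy) != original:
            raise RuntimeError(f"Backup copy at {copy} is corrupt")

# Example: two folders stand in for an East Coast and a West Coast data center.
backup = Path("city-backup.tar.gz")
backup.write_bytes(b"nightly backup contents")
replicate(backup, [Path("dc-east"), Path("dc-west")])
```

Scheduled to run nightly, a script like this is exactly the “runs without the city having to do much of anything” piece: the copies happen automatically, and a failed checksum raises an alert instead of silently leaving a corrupt backup.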

Error #2: A risk assessment that weighed upfront cost instead of the cost of a disaster.

Sure, it might be cheaper to set up another server in a building down the block. It’s also cheaper to buy health insurance with high deductibles that don’t cover serious medical conditions. In each case, the costs are astronomical when a disaster hits. Cheaper isn’t better, and price is a poor yardstick for judging a data backup solution’s ability to mitigate risk.

What’s the cost of losing your data? How will your community be impacted if all city records are lost? That’s the cost you should assess. From there, you can make a better case for investing in a disaster recovery solution that mitigates risks by storing data in a geographical location far from your city.

Error #3: A need to “see” the data and keep it close.

An ability to “see” and be near where your data is stored doesn’t mean it’s more secure. A server inside your city can lack the most basic security protection and be more open to hackers than your offsite data backup locked down with the highest security standards in a data center far away. Focus on security and an ability to recover from a disaster, not proximity to your data.

Error #4: A lack of a disaster recovery plan.

Clearly, this city did not think through the consequences of a disaster. It didn’t consider scenarios such as a tornado that can affect a wide area. Unprepared for a plausible worst-case scenario, the city found itself without its data and without a plan for losing it. Instead, it had assumed that a disaster destroying both buildings was too unlikely to worry about.


For cities, a disaster recovery plan needs to include proper offsite data backup. We recommend that any offsite data backup plan consider:

  • A minimum of daily backups sent offsite.
  • Sending those backups to a data center in a distant geographic location.
  • A minimum of quarterly testing to ensure that your data backups are working.
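Part of that quarterly test can be scripted. As a rough sketch (the file name, freshness threshold, and checksum manifest are illustrative assumptions), a verification script might confirm two things: the newest offsite copy is recent enough to reflect daily backups, and it still matches the checksum recorded when it was sent.

```python
import hashlib
import time
from pathlib import Path

# Daily backups imply the newest offsite copy should be under a day old.
MAX_AGE_SECONDS = 24 * 60 * 60

def verify_backup(copy: Path, expected_sha256: str, now: float) -> list[str]:
    """Return a list of problems found; an empty list means the backup passed."""
    problems = []
    if not copy.exists():
        return [f"{copy} is missing"]
    if now - copy.stat().st_mtime > MAX_AGE_SECONDS:
        problems.append(f"{copy} is older than 24 hours")
    actual = hashlib.sha256(copy.read_bytes()).hexdigest()
    if actual != expected_sha256:
        problems.append(f"{copy} does not match its recorded checksum")
    return problems

# Example: write a fresh backup and check it against its own checksum.
backup = Path("offsite-backup.tar.gz")
backup.write_bytes(b"backup contents")
digest = hashlib.sha256(b"backup contents").hexdigest()
print(verify_backup(backup, digest, time.time()))  # []
```

A script only confirms that files arrived intact; the full quarterly test should still include actually restoring data onto a server and confirming the applications run.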

Questions about your offsite data backup and disaster recovery plan? Reach out to us today.
