
How Massey Services Came Back From a Cybersecurity Attack

The possibility of a cybersecurity attack is probably not one of your number one concerns when you’re running your business. Labor shortages, rising material costs and demanding clients are all pressing matters that keep you occupied.

Even a dedicated IT team doesn’t automatically eliminate the chance of a cybersecurity attack. Adam Scheinberg, VP of information technology at Massey Services, based in Orlando, Florida, shares what happened when the company experienced one.

To set the stage, Scheinberg says their IT team was spread thin with a wide scope of responsibilities. They would run antivirus software monthly, and they required their employees to use complex passwords that changed every 90 to 180 days.

On Friday, Dec. 6, 2019, at 6:20 a.m., Scheinberg received a call from his systems architect, who said, “We’re getting cryptoed right now.”

“Now in 2019, crypto did not mean cryptocurrency, it didn’t mean Bitcoin,” Scheinberg says. “It meant there’s ransomware encrypting the data on our network. Now, although we were not ready for this, we had discussed what we might do if something like this happened. So, I gave the command, ‘Shut it all down now.’”

They started pulling power cables to get machines offline as fast as possible. Scheinberg says this was the first time in his 20 years with Massey that they intentionally took their entire network offline.

Initial Assessment and Response

Scheinberg says the volume of alerts that morning formed an unusual enough pattern that the systems architect investigated and saw what was happening.

As they assessed the situation, they found a ransom note on every affected machine, and the hackers claimed they had stolen around 50 gigabytes of data. The ransom was seven figures, payable in cryptocurrency. Scheinberg says they did discuss whether or not to pay the ransom. While the hackers would have handed over the keys to decrypt the data, he knew the stolen data would also be sold elsewhere on the dark web.

“They had administrative access, including to our databases and our Active Directory, which is all the authentication across our network,” Scheinberg says. “So, the full extent of the damage was unknown.”

Scheinberg gathered his team and laid down some guiding principles, as he knew this was not something they could fix in an afternoon. They prioritized their customers, team members and data, in that order, to guide their decisions.

He acknowledged they were two weeks away from Christmas vacation, so they staggered time off to let employees spend time with their families without burning out.

“You are not unemployed, and you are not going to be,” Scheinberg says. “I know you feel like we failed, and you might worry about your job. But let me turn the tables, you are now the most important part of this company. There is nobody more important than the technology team right now.”

Their plan was to assess the damage, rely on their expert partners to handle the forensics of what happened and overcommunicate with the team about their current status. Scheinberg says they also decided to rebuild the system, sparing no expense, to make sure an attack like this couldn’t happen again.

He says one of the key takeaways from the attack was that you will rarely regret acting, but you will often regret not acting fast enough.

“In other words, a good decision now is better than a better decision later,” he says. “Time is of the essence, so instead of being paralyzed by fear, we will act.”

Moving Forward

As they appraised the damage, they found their backups had been hit as well, meaning they had no data. At this point, Scheinberg wondered whether they could even function as a company going forward. They called their CRM vendor to ask for the most recent copy of Massey’s database on file; the vendor had a version from six months earlier.

Going from no data to six-month-old data was progress, but Scheinberg asked his team to check every server that had ever held a copy of the database for anything more recent. They ended up finding a copy that was three weeks old. While it wasn’t an ideal situation, it was no longer an extinction-level event.

Scheinberg says how you react and how you discuss things like this with the team matters.

“While we’re panicking, if we relay a sense of panic to our team, we will lose people, we will lose faith,” he says. “Even if we get back on our feet, we will have people who think that we’re incompetent. We will have people who decide not to show up tomorrow. We will cause harm. But if we talk about things differently, ‘we just saved the company,’ then it lands differently.”

In the meantime, Massey’s field crews were still working that morning because their iPads were loaded with data from the night before, but Scheinberg says they were running out of time. As the tech team continued to try to find a more recent copy of their data, Scheinberg asked how they knew the backup data was gone.

It turned out that while the software that reads the backups reported nothing was there, when they pulled up the actual hard drives, they found the data was still intact. Scheinberg compares it to an encyclopedia with a missing index: because the index was gone, the software declared the whole book empty.

“What we ended up doing was saying, ‘Forget the index, just turn a few more pages in, is there data there?’” he says. “It’s going to be painful, but we can recreate the index because we have the book and that is exactly what happened. Our backups were all there.”

Thanks to this, Massey had only lost six minutes of data.
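The article doesn’t name the tools Massey’s team used, but the “forget the index, turn a few more pages in” approach is essentially file carving: scanning the raw bytes of a drive for known file signatures instead of trusting a missing or corrupted catalog. A minimal sketch of the idea in Python, with an illustrative image path and signature list:

```python
# File-carving sketch: find known "magic number" signatures in a raw
# disk image rather than trusting the (missing) backup index.
# Hypothetical illustration, not Massey's actual recovery tooling.

SIGNATURES = {
    b"\x50\x4b\x03\x04": "zip",  # also .docx/.xlsx containers
    b"\x25\x50\x44\x46": "pdf",
    b"\xff\xd8\xff": "jpeg",
}

def scan_for_signatures(image_path, chunk_size=1024 * 1024):
    """Yield (byte offset, file type) for each signature found."""
    offset = 0
    with open(image_path, "rb") as img:
        while True:
            data = img.read(chunk_size)
            if not data:
                break
            for magic, kind in SIGNATURES.items():
                pos = data.find(magic)
                while pos != -1:
                    yield offset + pos, kind
                    pos = data.find(magic, pos + 1)
            # Signatures straddling a chunk boundary are missed here;
            # real carving tools keep an overlap buffer.
            offset += len(data)

for byte_offset, kind in scan_for_signatures("backup_volume.img"):
    print(f"possible {kind} data at byte {byte_offset}")
```

Real recovery tools go further, parsing each format to find where a file ends, but the principle is the same: the pages of the book are still on disk even when the index claims the book is empty.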

By that afternoon, they decided to move all the data to the cloud. On day two after the attack, they had a basic understanding of where they stood and had to make decisions in real time, such as what to name their servers and where to put them. Each department had competing priorities for which systems needed to be back up and running first.

Using their agreed-upon order of priorities, they started working to restore critical systems, so they could be partially operational. By Sunday, they brought their CRM back online and they had working iPads.

Because they had to assume every server was compromised, they had to collect every single Windows machine from the entire company spread across multiple states. They reset everyone’s passwords network-wide and enabled threat protection for emails.

How Did It Happen?

To establish the forensic trail of how the attack happened, Massey turned to their security partner. They hired a negotiator to talk with the hackers on the dark web, and because the hackers boasted about exactly how they stole Massey’s data, the team learned a lot. The forensic partners were also able to reconstruct exactly what happened.

In May 2019, a user received an email containing a password-protected ZIP file, with the password listed in the message. Inside the ZIP was a file with a double .doc.vbs extension. A .vbs file is a Visual Basic script that can make changes to the system. When the user opened this supposed document, the script ran, allowing it to read the cached credentials stored on the computer.
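Part of why this lure works is that Windows hides known file extensions by default, so a file named invoice.doc.vbs can display as invoice.doc. One common defense is a mail-gateway rule that quarantines deceptive double extensions. A hedged sketch, with illustrative extension lists rather than any specific vendor’s filter:

```python
# Flag attachment names that pair a document-looking extension with a
# script extension, e.g. "invoice.doc.vbs". Illustrative rule only.

SCRIPT_EXTS = {"vbs", "js", "wsf", "hta", "scr", "bat", "ps1"}
DOC_EXTS = {"doc", "docx", "xls", "xlsx", "pdf", "txt"}

def is_deceptive(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:
        return False  # no double extension to evaluate
    _, middle, last = parts
    return middle in DOC_EXTS and last in SCRIPT_EXTS

for name in ["report.doc.vbs", "report.docx", "notes.txt.js"]:
    print(name, "->", "QUARANTINE" if is_deceptive(name) else "ok")
```

A production filter would also block script extensions outright and look inside ZIP archives, since that is exactly where this one hid.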

The script also looked for anyone else who had logged into the computer and obtained the local administrator password. Because those same local administrator credentials were cached on other machines, the script could move laterally and access other computers. It worked through those machines until it found credentials with full access to the entire network, allowing it to move to the servers.
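This kind of lateral movement only works when many machines share the same local administrator password. Tools such as Microsoft’s LAPS defeat it by giving every machine its own regularly rotated password. A minimal sketch of that idea, with hypothetical hostnames; a real deployment escrows the passwords in a secured directory rather than printing them:

```python
# Sketch of the idea behind unique per-machine admin passwords (as in
# Microsoft LAPS): one stolen credential no longer unlocks the fleet.

import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def new_password(length: int = 20) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def rotate_fleet(hostnames):
    # In practice each password is set on its machine and escrowed in
    # a vault with restricted read access, never printed or reused.
    return {host: new_password() for host in hostnames}

for host, password in rotate_fleet(["ws-001", "ws-002", "branch-07-pc"]).items():
    print(host, password)
```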

“They were smart enough to find a server that is just unexciting,” Scheinberg says. “It just stores security certificates, or it just controls authentication to Wi-Fi or something that we wouldn’t use on a regular basis. Certainly, it’s not going to be the server that has everybody’s files on it or the server that runs our databases that we watch very closely.”

The hackers planted a backdoor on that server, giving them persistent remote access to the network. In October, they sold that access to a second party. This second individual spent a few weeks in November sending Massey’s data outside the network before placing the ransom demand on Dec. 6.

Massey happened to get lucky: the folder the hackers stole was labeled ‘customer data,’ but it actually contained mostly marketing and PR material.

“What did they get?” Scheinberg says. “They got stuff that we were very comfortable sharing. What didn’t they get? They didn’t get our database. They didn’t get our customer information because that’s way too big.”

However, some personal files that were stolen contained 1,100 Social Security numbers of team members and 800 driver’s license numbers. The individuals with compromised information were notified and given credit protection. Scheinberg says, in this case, it was not a weakness of their systems but a weakness of their people.

“Not every breach is this level where someone owns your entire network, all your data, all your authentication across the board,” Scheinberg says. “But I bet you somebody uses their work email address for their gym membership and their gym has been compromised. Or their Netflix that they share with other people. Or they use that same password for all their other accounts and that password has been breached somewhere else and now someone knows I can associate this person with this password.”

Scheinberg says they implemented a number of changes after this event, including backing up their backups and conducting phishing training with their employees. Those who fail are enrolled in further training.
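“Backing up your backups” only pays off if the copies are verified. A simple sketch of a periodic integrity check that compares checksums between a primary backup and a secondary copy; the paths are hypothetical, and a real job would also test-restore samples:

```python
# Compare a secondary backup copy against the primary, file by file,
# using SHA-256 checksums. Hypothetical paths; illustration only.

import hashlib
from pathlib import Path

def sha256(path: Path, chunk: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            digest.update(block)
    return digest.hexdigest()

def verify(primary_dir: str, secondary_dir: str) -> None:
    primary, secondary = Path(primary_dir), Path(secondary_dir)
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        copy = secondary / src.relative_to(primary)
        if not copy.exists():
            print(f"MISSING in secondary: {copy}")
        elif sha256(src) != sha256(copy):
            print(f"MISMATCH: {src}")

verify("/backups/primary", "/backups/offsite")
```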

“Don’t build not to get breached,” he says. “Build so that when you are breached, you’re ready and the damage is limited.”

Jill Odom

Jill Odom is the senior content manager for NALP.