
Above Board: Cybersecurity - What Boards Should Know Now

MoFo Perspectives Podcast

01 Dec 2020

In this episode of the Above Board podcast, Morrison & Foerster partner Dave Lynn speaks with Miriam Wugmeister, co-chair of MoFo’s preeminent Global Privacy and Data Security Group, and John Carlin, chair of the firm’s Global Risk and Crisis Management practice group, about directors’ roles in maintaining their companies’ cybersecurity, and mitigating the damage caused by a cybersecurity incident.

In this episode you’ll hear Miriam and John discuss:

  • A cybersecurity preparedness practice that they identify as being crucial for a company’s management and board of directors;
  • What boards can do to help make certain their company is well-positioned to avoid a cybersecurity incident and sufficiently resilient to overcome one;
  • At what point board members should be informed of a cyber-breach;
  • Whether boards should ever conduct an independent investigation into a cyber-breach;

And more!

Transcript

Speaker: Welcome to MoFo Perspectives, a podcast by Morrison & Foerster, where we share the perspectives of our clients, colleagues, subject matter experts, and lawyers.

David Lynn: Hello, I’m Dave Lynn and I’m a partner at Morrison & Foerster based in the Washington, D.C., office, and I’m pleased to be joined by my colleagues today, Miriam Wugmeister, who is a partner based in New York, and John Carlin, who’s a partner based in Washington, D.C. Miriam, John, thank you for joining me today.

Miriam Wugmeister: Thanks for having us.

John Carlin: Great to do it.

David Lynn: Today, we’re going to focus on the cybersecurity issues that directors should be aware of and the steps that they should take to be prepared for a cybersecurity event. First question I have is, directors have really heard a lot about the topic of cybersecurity over the past few years. It’s certainly been a big focus in the director community and at corporations. What are the key areas that directors should really be particularly focused on today?

John Carlin: One key way of thinking about it for a board, I think, is your role before, during, and after an incident. And really, the most substantive part of your role is before and after: the types of questions you’re asking before an incident, and then, after an incident, getting a briefing on what occurred and what the opportunities are to improve in the future. During an incident, though, Miriam and I have both found a significant difference in how boards and companies react when they’ve prepared by doing a tabletop or war game, where they’ve walked through with key executives what the board’s role would be in an actual incident before one occurs. Working through an actual fact pattern and thinking through the escalation clarifies for the management team what types of guidance the board would give and where they’d expect the board to make decisions, because in a real incident, you have to make those decisions so quickly. It also tends to clarify for the board, once they’ve lived through an incident, the types of questions they should be asking pre-incident to improve the company’s ability to be resilient and to respond to an incident.

Miriam Wugmeister: I, a hundred percent, agree. And part of what John is saying, too, is that by doing a tabletop and working on these issues before there’s an incident, companies can really start to understand how incidents are going to be escalated to the board and what kinds of decisions a board is going to have to make. And also, as John pointed out, there are a lot of things the board can do before there’s an incident to help the company be in the best position it can be to avoid an incident or, if there is one, to be resilient. Some of those are very tactical: asking management how decisions are being made regarding cost and staffing, asking specific questions about some of the very well-known trends in the way threat actors actually compromise systems, and making sure there’s a lot of clarity, like we just said, about how things get escalated to the board and what the board’s role is and is not in the context of an incident.

David Lynn: One question that often comes up is when should the board be involved when a cybersecurity event happens, how soon does the board need to be involved? And then once they’re involved, how often should they be briefed and how closely should they be monitoring the situation as it unfolds?

Miriam Wugmeister: I’ll start with that one. Of course, I’m going to be a lawyer and say that depends, Dave. Right? But it does depend on the type of incident and on the severity of the incident. Certainly, what we’ve heard from boards that John and I have spoken to is board members absolutely want to know about an incident from management before it becomes public. Board members really don’t like it when they learn about an incident at an organization from the newspaper or a call from a reporter. That’s definitely a clear boundary. If you know an incident’s going to become public, the board members definitely want to be informed before then. But there are questions about whether or not the full board needs to be informed, whether or not it’s a committee of the board, whether it’s the chairman of the board. And that’s really going to depend also on what kind of committee structure the organization has set up. There’s lots of debate right now about what’s the best way to organize a board, whether or not boards need to have separate risk committees, whether they should have separate cyber committees. And again, that’s going to depend, I think, in large part on the type of organization. John, I don’t know what your thoughts are on that.

John Carlin: So, going back to the tabletop or test idea, when we have discussions with boards in this area, we often suggest trying out sample scenarios with your team. There’s a constant steady state of incident activity, so start with something that is a day-to-day type of activity and ask what the board’s appetite is for hearing about it. Usually, incidents that might be labeled intermediate or low-level under the incident response plan—the terminology changes—are not briefed to the board as they occur, but instead are briefed out quarterly or more often as an indication of trends happening at the company. It’s only when an incident reaches a certain level of criticality that there’s a notice to the board. And then, exactly as Miriam said, getting the full board’s buy-in to an approach—which is going to depend on the culture and the other ways the board is briefed—is very important to do before you’re in the actual crisis, because what makes cyber so different from other types of crises is the speed with which decisions need to be made.

John Carlin: And then secondly, the uncertainty. You’re simply not going to have, in most of these cases, fidelity on exactly what happened, and yet you’re going to need to make company-existential-type decisions relying on your management team. When you’re thinking about the cadence of notification, it needs to be realistic: notification has to be tied to a certain level of severity, or you’re just going to hamstring the team, and all they’ll be doing is notifying you—that’s number one. Number two, you need a mechanism where, as Miriam was saying, it’s often a particular individual—the chair of the board, the chair of the audit committee, or the chair of the technology committee, if there is one, depending on how the board is structured—who has a conversation with the CEO or someone else in the C-suite: we have this incident, here’s what we know now. A lot of boards, I think, leave it to the discretion of that individual to decide whether, at that time, it needs to be briefed more broadly to the board. And then there’s the threshold question: when do we move out of committee, out of individual notices, to the full board? At that point, I think it needs to be a pretty severe incident to go to the whole board.

David Lynn: When a company finds itself in a severe cybersecurity event, how can board members be useful to the process? For example, one question that comes up from time to time: are there situations where the board, or a committee of the board, should conduct its own independent investigation of the situation? I’d be interested in hearing your thoughts on those topics.

John Carlin: Let’s start on that one. It’s rare. I think the only time a board would need to start its own independent investigation is when it has lost confidence in the management team that is otherwise briefing it or responsible for the investigation. What you find boards doing instead—and it’s totally appropriate—is getting a commitment in advance that the findings of the investigation will be briefed to the board. That investigation is typically run by outside counsel retained to oversee the work of a third-party forensic investigator, working with the law firm to determine what happened and where there are opportunities for improvement. And if we go back to the three stages—the board’s role before, during, and after an incident—that briefing is critical to the board performing its post-incident function: determining where there are opportunities for improvement and what changes should be made, so that either it doesn’t happen again or the impact of it occurring again is mitigated. I think we see some recurring issues, particularly around board structure, that often come up post-incident.

Miriam Wugmeister: Well, before we talk about board structure, let me add one other thing, which is that the issue we see more often than not is a member of the board trying to be helpful, but starting to blur the line between being a board member and being a member of management. And Dave, you could probably speak to this far better than I can, but there are risks if board members become too involved in the day-to-day management of an incident. During an incident, boards are appropriately providing an oversight role; they’re not supposed to be in the weeds helping to manage the incident. That is sometimes an issue we see. But going to your other point, John, about the structure of the board afterward: we definitely are seeing a trend where cyber historically was handled by the audit committee of many boards, and now boards are taking a look at that and thinking about whether the expertise on the audit committee is the right expertise to handle cyber and data security.

Miriam Wugmeister: And we definitely are seeing companies put in place risk committees or cyber committees. And particularly after an event, particularly if it’s a large event, we are seeing companies either stand up new committees or charge existing committees with the task of really understanding what happened, understanding what the remediation plan is, and holding management accountable for achieving that remediation plan. John, I don’t know if you had more to add to that?

John Carlin: No, I think that having that type of discussion now is critical. And as part of that, boards have been taking a look at who really has the expertise in this area on the board. I think one reason this has been more difficult for some boards—even though it’s a new area of risk that in many ways is very similar to the other areas of risk they’re overseeing—is what I’ll call the translation issue. It’s difficult to understand what their technical experts are telling them, and ultimately, post-incident, they find they didn’t really understand the briefings they were getting from the Chief Information Officer or the Chief Information Security Officer. Having at least one person on the board who specializes in the area and can help translate is something to think about pre-incident. And then, secondly, make sure the burden isn’t on you: push on the management team if you don’t think you understand what they’re saying, so they present it in a form you can convert into the way you’re used to analyzing risk. That’s an area you should push on if you feel you’re not getting that information in a digestible way pre-incident, and often your common-sense instincts are right, and it can be quite helpful to the management team.

Miriam Wugmeister: I mean, we’re seeing more and more companies now creating dashboards and reporting on specific milestones because, for a long time, particularly on the cyber side, the CISO or the CIO would come to the board and say, “everything is great.” That obviously is not going to cut it anymore in this day and age. So having clear milestones and clear areas where management is going to report does several things. One, it instills confidence in the board that management really is on top of what’s going on. Two, it gives the board a frame of reference, so that even if they aren’t technical experts, they can understand what’s being presented to them. And three, it allows the board to make sure they’re not just getting a rosy picture—that they actually are getting into the meat of it—without needing real technical, detailed knowledge, which shouldn’t be required.

John Carlin: No, it’s a great point. If you are hearing that this can’t happen here—particularly, to use a hot trend right now, with ransomware-type events—then that’s the time to worry. The more confidence you hear that an incident can’t occur at your company, the more worried you should be, because the fact is there is not an internet-connected complex system—like a company of any reasonable scale—that’s safe from a dedicated adversary, and the offense is outstripping defense in this space. This year will be a record year in terms of the number of bad guys—criminal groups, national security groups—hacking into companies for financial gain by extorting the company, stealing information, encrypting information, or threatening to disrupt servers. What you want here is both the measures they’re taking to reduce the risk of that happening, and also a clear-eyed assessment of the fact that it may happen—and, when it happens, here’s what we have in place to respond. Make sure you are quite comfortable with that response.

John Carlin: Miriam, I know you’ll agree with me on this. In every incident that reaches board attention—and it’s happening at an unprecedented pace—there’s a radically different approach to spend post-incident, both on defending the system and on resilience: what it would take to get the system back up. And if you go talk to your colleagues who’ve lived through one of these, I think everyone will tell you they wish they had pushed for that spend earlier. So now’s your chance to take a good look and see where there are unfunded projects—not guaranteeing they get a blank check, but making sure you are adequately pricing risk when deciding where you think you’re saving money. And it’s often not just about money or buying a new tool. It’s the fact that some security measures add friction—they make life a little more difficult for the business side of the house—so the natural tendency is to reduce that friction until you have an incident. And it’s some of those changes that are key to where we see incident after incident occurring.

Miriam Wugmeister: The other way we see it, John, is companies have lots of tools, but they don’t have the people with the training or the skills to interpret the information that comes off the tool. Companies buy the shiny object, but they don’t invest in the people and the processes to make sure that they understand the readings from those tools. And that actually can lead to bigger problems, because then you have the situation where your tool actually detected a problem and the company missed it, and then that becomes quite difficult to defend. It’s not just a matter of money and buying the new shiny object. It’s also making sure that you have the processes and people in place to be able to understand and utilize those tools that you do put in place.

John Carlin: Such a good point. And the vast majority of these incidents are really a result of what’s called, in the field, blocking and tackling. It’s not a question of whether you have something high-end; it’s whether you’re executing on what everybody knows at this point are the right steps for a company to be taking. Do you have multifactor authentication enabled—in other words, not just a username and a password, but something else required to get in? Do you know where your systems are? Inventorying systems comes up almost every time: companies think they’ve done it until there’s an incident, and then suddenly they’re encrypted and can’t even figure out what the universe of systems is—particularly in this time of COVID, when systems are being retired and rolled out as companies try to accommodate people working from home. And for the highest-level access accounts, the so-called administrator accounts: are they carefully controlled and monitored in a way that’s different from all the other accounts?

John Carlin: It’s steps like these that everyone in the information security space knows, but that usually aren’t being executed—and it’s not a competence issue. It’s usually for a reason: because of cost, or because they’re relying on a system that’s out of date and waiting for it to be updated before they put the change in, or because a high-level executive complains when you put multifactor authentication in, so they say, well, we won’t make them go through that extra step. I don’t know if you can think of some others, Miriam. Those are some common ones that just keep recurring—where a well-timed board question or two could help make the difference that protects you from a massive cyberattack.

Miriam Wugmeister: And we also hear a lot, John, like you were saying, about friction on the customer side, right? That’s true for companies with either consumer or B2B business. On the business side, there’s always reluctance to make it harder for customers to reach you or to slow down the registration process. And so those decisions where you balance security against convenience look very different before an incident and after an incident. Miraculously, after an incident, everybody decides it’s okay to have a little more friction in order to protect information. Those are really important questions for the board to be asking before there’s an incident, not just after.

John Carlin: And I guarantee you, your team is making those tradeoffs. Since that type of risk decision is often where you can add great value as a director, make sure they’re articulating to you what the tradeoffs are so that you can weigh in on the appropriate risk posture for the company.

David Lynn: Great. Well, thank you, Miriam and John, for all of those insights. I really appreciate it.

Speaker: Please make sure to subscribe to the MoFo Perspectives podcast so you don’t miss an episode. If you have any questions about what you heard today or would like more information on this topic, please visit mofo.com/podcasts. Again, that’s MoFo, M-O-F-O, dot com slash podcasts.

