Maritime Cybersecurity – Context Matters

Jul 26, 2023 | Technology, Cybersecurity & Risk Management

The maritime sector can be described in terms of two very basic “mission statements” or goals. The first involves moving people and goods to their intended destination so that they arrive on time, in acceptable condition (i.e. safely), and at a reasonable cost. This applies to cruise lines, commercial shipping, ferry services, and the like. The second is to deliver the capacity to do something to the right location, in time, and at a reasonable cost. This second mission statement applies to military action, law enforcement, exploration, fishing, and so on. Between the two, we cover much of the waterfront with respect to maritime operations. 

In an article published recently in Maritime Executive, I put forward that we need to take this kind of approach within the IT security field. The concept can be expanded by treating security as an attribute of a system or product, not something that is just “bolted on.” 

When we design something, we obviously have some purpose in mind. What makes that product or service unique? Oddly enough, we’re not talking about security at this point. We’re talking about business and the value proposition. This is important because we don’t design things, be they products or services, to be the most secure – unless we’re selling security. Sonobuoys detect sound, whale trackers track whales, and so forth. 

When considering how to apply security within the design process, you should be able to answer four questions: 

  1. Does the security posture support the functionality of what I am building? 
  2. Does the security posture have a positive impact on the value proposition of what I am building? 
  3. Does the security posture address reasonably foreseeable risks? 
  4. Can I trust the claims being made by the security design?

The first question is often looked at incompletely. Many security analysts will look at the aspects of confidentiality (preventing unauthorized access), integrity (trustworthiness of data and processes), and availability (services on demand, in usable form). But what happens when a system begins to operate in a contested (i.e. conflict) or complex (i.e. multiple factors leading to less predictable influences) environment? Availability may not be enough. We may need to look less at basic availability and begin to factor in cyber resiliency goals like anticipation, the ability to withstand impacts, recovery, and adaptation.

[Image: Floating wind turbines installed at sea]

Consider a remote system that is difficult to maintain, such as an offshore wind farm. One approach may be to pump data through a satellite connection to a remote operating station where it can be monitored. The security practitioner, in this case, needs to move beyond issues of confidentiality (i.e. people having unauthorized access to the data) to perhaps more critical issues, such as whether the data is available to the monitoring site and whether it is trustworthy. They would then need to be able to anticipate difficult conditions (interference due to weather, etc.), take steps to address those conditions, recover if the signal is lost, and ultimately identify the cause of the loss and present information useful in preventing future occurrences. In this case, the security analyst cannot simply look toward confidentiality-based models but needs to devote at least some of the thinking to the robustness of the system, its resilience, and potentially even the levels of redundancy needed to maintain the functionality of the system. This is not just a security issue; this is where security and design engineers need to let go of “who’s first” and work on identifying the challenge and finding workable solutions. 
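To make that concrete, here is a minimal sketch in Python of how those resiliency ideas might show up in the monitoring software itself. It is an illustration only, not drawn from any particular wind-farm system, and the `link` object with its `transmit()` method is a hypothetical stand-in for the satellite uplink: readings are buffered locally when the link drops, transmission resumes when it returns, and the reason for each outage is recorded so the cause can be identified afterwards.

```python
import time
from collections import deque

class ResilientUplink:
    """Buffers sensor readings locally and forwards them over an unreliable link."""

    def __init__(self, link, max_buffer=10_000):
        self.link = link                        # hypothetical satellite link object
        self.buffer = deque(maxlen=max_buffer)  # withstand: hold readings during an outage
        self.outage_log = []                    # evidence for identifying the cause later

    def send(self, reading):
        """Queue a reading and attempt to drain the buffer."""
        self.buffer.append(reading)
        return self.flush()

    def flush(self):
        """Transmit queued readings until the buffer is empty or the link fails."""
        while self.buffer:
            try:
                self.link.transmit(self.buffer[0])  # assumed to raise ConnectionError on failure
                self.buffer.popleft()               # recover: resume draining once the link returns
            except ConnectionError as err:
                # anticipate/withstand: keep the data, log why the link dropped, try again later
                self.outage_log.append((time.time(), str(err)))
                return False
        return True
```

Redundancy, such as a second communications path or a local operator console, would sit alongside a buffer like this rather than replace it; the point is simply that availability alone does not capture the anticipate, withstand, recover, and adapt cycle described above.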

The reason for this is very simple. You can design or build a brilliant product or service, but such things are remembered as much for their failures as for their successes. Those failures can detract from, or distract from, the value proposition of the product or service, something no company that has made significant investments wants to see. 

There are those who argue that security only functions in the loss-reduction space. This may be true for “bolt-on” security approaches. I would argue that it is not true when you treat security as an attribute of the system rather than simply a compliance-rated or certification-driven exercise, although those exercises are important in various forms of approval and acceptance activities. The degree of trustworthiness in the various claims made by the product can, and should, be supported by security functionality or other design considerations that accomplish the same end. 

Context becomes important here. Best practices, standards, and similar forms of guidance have three attributes that need to be managed. First, they are written for a relatively broad audience; as a result, they can omit or fail to address issues that are particular to your environment. Second, they are written with specific contexts in mind. Can you safely assume that the contexts they were written for reflect your own operational and infrastructure risks well enough to trust them blindly? Frankly, many are doing exactly that. Third, these are published documents. People need to recognize that publicly available documents like these are accessible both to those who want to protect their infrastructure and to the adversaries who may want to disrupt it. Again, context comes into play here. An individual’s personal pleasure craft faces a very different level of interest than a warship. 

And this is where the final aspect comes into play: trustworthiness. Three considerations come to mind. First, there is the strength of the control (in this case, the design process). Simply picking up a best practice and applying it without any supporting or surrounding analysis is, frankly, lazy. Where analyses and assessments are done to understand operations, infrastructure, value propositions, and so on, the strength of that output (in terms of its ability to address reasonably foreseeable risks) goes up significantly. The second is the level of rigour applied in that process. While it’s good to have a solid plan, how that plan is executed is just as important. Finally, what was the environment in which that plan was executed? Was it free and clear of influences that could detract from the work? 

As we look at cybersecurity in the maritime space, we may want to think about the level of rigour we are applying. If we are truly looking at protecting important systems, such as safety-critical systems, then we should be applying a comparable level of rigour and forethought in the cybersecurity designs, and not relying on general principles and practices as anything more than what they were intended to be: a guide. 


Allan McDougall, MA BMASc PCIP CMAS CISSP CPP PSP CMSP
Senior Security Analyst