How Will We Protect American Infrastructure from Cyberattacks?

Infrastructure — it’s one of those words we think we understand, but it can be a hard concept to wrap our brains around. We may vaguely imagine electrical grids or railroads, but infrastructure also includes many other services that are essential for keeping our homes, schools and businesses thriving. It includes roads and transportation, telecommunications networks, water and sewage systems, and electricity. And today, much of it is connected to the internet.

“We’re all connected so deeply through the internet in so many different ways, whether we realize it or not,” said Jamie Winterton, director of strategy for Arizona State University’s Global Security Initiative (GSI). “Having your credit card number stolen by an online thief would obviously be a terrible thing. But if the water to your home became unsafe because it was tampered with, or if the power was out and it’s summer in Arizona, or it’s winter in the Northeast, this is where that coupling of the internet to our lives becomes very direct, affecting not just our quality of life, but our life source.”

As the Colonial Pipeline hack and subsequent shutdown reminded us so recently, our infrastructure’s digital connectedness — while bringing benefits like convenience, better monitoring and remote problem-solving — leaves it vulnerable to cyberattacks.

As the Biden administration looks to implement the American Jobs Plan, which includes expanding U.S. infrastructure, cybersecurity needs to be a key consideration to prevent even more costly and dangerous attacks.

ASU is home to a bevy of experts on cybersecurity — in fields from computer science and law to business and humanities — who come together to understand and find solutions to this complex, far-reaching problem.

What’s my incentive?

Winterton studies the role of incentives in cybersecurity and how policy can affect those incentives.

“The federal laws that address computer fraud were invented in 1984,” she said. “Computers and the internet have changed considerably since then, but policy has had a really hard time keeping up.”

One result of this lag is the difficulty in figuring out the responsible party. For example, if a company experienced a large data breach, is it the fault of the company, the software provider, the person who didn’t patch the system or the CFO who didn’t fund security adequately?

Compounding this confusion is the fact that cybersecurity is spread out over multiple federal agencies. In this example, the overlaps and gaps between them would make it unclear who in government should oversee investigation into the data breach.

Yet another vague area is pinpointing the impact of the data breach, such as how much it cost the companies and individuals involved.

“You know exactly the cost if you total your car — it’s the worth of your car. But if you lose 145 million credit records, like Equifax did, what’s the dollar amount that’s assigned to that? We really don’t know,” Winterton said.

The lack of clear and up-to-date policy, she argues, ultimately means that companies may have a greater incentive to buy a data breach insurance policy than to be proactive about cybersecurity. This leaves individuals’ data inadequately protected.

Individuals may also lack proper incentives to make smart cybersecurity moves. That’s because messaging to the public tends to hyperfocus on all of the dangers and neglect practical advice, leaving people feeling overwhelmed.

“I would always ask people, ‘What’s your biggest concern?’ And the answer I got most frequently was, ‘I don’t know. I know I should be worried, but I don’t even know what to worry about,’” Winterton said. “And in that case, as technologists we’ve failed. We haven’t done a good job of helping them understand what their real threats are and the steps that they can take.

“If we rethink the laws and policies around cybersecurity and assess what effects they’re actually having, then I think we’ll start to see how our incentives often undermine security. Then, we can figure out how to change them to make people, companies and the nation safer.”

Kidnapping valuable data

Imagine you decide to become a neighborhood burglar. Your first step would be to go around the block, trying a bunch of front doors to see which are left unlocked.

“You could run around and do maybe a hundred doors in a day, but you’re limited by geography and physics in terms of how many you could potentially test,” said Adam Doupé. “Whereas once you attach a computer to the internet, literally anybody on Earth could say, ‘I’m going to jiggle the front door of a billion devices on the internet,’ and it takes about a half hour.”

Doupé is the acting director of the Global Security Initiative’s Center for Cybersecurity and Digital Forensics and an associate professor in the School of Computing, Informatics, and Decision Systems Engineering. The center’s aim is to keep users safe, whether they’re working with an iPhone, a browser, an electrical grid or an oil pipeline.

Critical infrastructure presents a unique cybersecurity problem. Because it’s critical, there’s a tendency to avoid software security updates since the update could potentially mess up other parts of the system. Many critical infrastructure companies don’t have procedures to check that a patch won’t interfere with the system’s function, Doupé said.

Malicious hackers often try to exploit this dilemma by infecting machines before the fix for a known vulnerability has been applied everywhere.

Ransomware is a type of cyberattack in which hackers take data hostage by encrypting it, releasing it only when the data’s owner pays a ransom. This type of attack was behind Colonial Pipeline’s temporary shutdown, and such attacks are becoming more common.

“One of the risks in my mind is we get so focused on just preventing this specific incident from happening again, instead of trying to identify a root cause and apply that to all the critical infrastructure that we have,” he said.

Ransomware used to be aimed at random individuals and earned criminals only a few hundred dollars at a time. Now, however, malicious hackers are doing their research and attacking valuable data that they can hold for millions in ransom money.

“It’s almost like targeting and kidnapping the children of billionaires,” Doupé said.

Organizations are often told that the best way to protect themselves against ransomware attacks is to keep backups of their data that aren’t connected to their machines. The problem is that many never test how quickly they can restore their systems from those backups.

“What we’ve seen is if it takes you a month to get restored, you’re going to pay the money for the ransomware, even if you have backups,” Doupé said.

Sometimes companies think they can relax on security if they isolate their computer systems from the public internet and other unsecured systems, which is called air-gapping. However, criminals can still infiltrate air-gapped networks using innovative methods like infected USB drives, nearby mobile phones and even audio frequencies undetectable to human ears.

It’s a serious problem with sometimes deadly consequences, such as a hospital suddenly being unable to care for patients during a medical emergency because it is dealing with an attack. And as companies continue paying these ransoms, it only inspires more cybercriminals to demand them.

Doupé and others are working on solutions so that cybersecurity isn’t such a hassle for large companies.

He’s currently involved in a project with the Defense Advanced Research Projects Agency on “assured micro-patching.” The goal is to create the smallest patch possible, one that changes as little of the underlying code as it can, and to use mathematical proofs to guarantee that a system will still work after the patch is deployed.

As the U.S. expands its infrastructure under the American Jobs Plan, it will be critical to build cybersecurity into that infrastructure from the beginning rather than trying to bolt it on at the end, Doupé argues.

Choose wisely … with the help of AI

Usually when people think of software security, they think of finding bugs. But Tiffany Bao is interested in all of the decisions that come after the moment of discovery. For example, when should the cybersecurity professional report it to the software vendor? When is the best time to patch the bug? Does it even need to be patched?

Bao is an assistant professor in the School of Computing, Informatics, and Decision Systems Engineering. She researches how to address software vulnerabilities by combining artificial intelligence and game theory, a method that aims to find the best solution at the lowest cost.

All actions in cybersecurity have a cost — even well-intentioned measures to prevent harm. For example, a patch may cause other issues in the system.

“It’s like when you shop for a big piece of furniture. You really love it, you know that it will make your house pretty, but it’s huge. So then you need to think about whether you really want to buy it,” Bao said. “If you buy it, it comes with costs. You need to pay for it, maybe you will have to hire a mover or adjust your other furniture. Game theory gives you this nice way to model the costs and benefits.”
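To make that cost-benefit framing concrete, here is a minimal sketch in Python using entirely hypothetical figures — the dollar amounts and probabilities are illustrative assumptions, not drawn from Bao’s models. It weighs the certain cost of applying a patch now against the expected cost of leaving a known vulnerability exposed.

```python
# Minimal illustrative sketch with made-up numbers: compare the expected total
# cost of patching now versus waiting, given a guess at breach probability.

def expected_cost(action_cost: float, breach_probability: float, breach_cost: float) -> float:
    """Certain cost of the chosen action plus the probability-weighted cost of a breach."""
    return action_cost + breach_probability * breach_cost

# Assumed example values: patching costs $50,000 in downtime and testing, and cuts
# the chance of a $2 million breach from 10% to 1%.
patch_now = expected_cost(action_cost=50_000, breach_probability=0.01, breach_cost=2_000_000)
wait      = expected_cost(action_cost=0,      breach_probability=0.10, breach_cost=2_000_000)

print(f"Patch now: ${patch_now:,.0f} expected cost")
print(f"Wait:      ${wait:,.0f} expected cost")
print("Better choice:", "patch now" if patch_now < wait else "wait")
```

A full game-theoretic model would also account for how an attacker responds to the defender’s choice; this sketch only captures the underlying cost-benefit arithmetic.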

Building a model to find the optimal strategy, however, can take considerable time and effort. This is where artificial intelligence comes in — it can give approximate results that reveal what is most likely to lead to the optimal strategy. A system’s ability to search out its own bugs and suggest solutions is called cyberautonomy.

It will be a little while before cyberautonomous systems are deployed; right now they’re still in research and development. However, Bao and other researchers are motivated to understand and implement them.

“If there’s a system that can find bugs and also make intelligent decisions based on the situation, then that would definitely make the network and the computer more secure,” she said.

Currently, it’s up to humans to do the work of scanning for bugs and figuring out a course of action.

“People try to make the best decision, but sometimes they just don’t know what the best decision is,” Bao said. Cyberautonomous systems can better compute the ideal decision, which they can then present to humans who make the final call on what to do.

Game theory can also provide valuable information on issues surrounding cybersecurity efforts, such as the ideal cyberdefense budget. For example, game theory models can help predict the outcomes with a $10 million vs. a $20 million budget.

“In recent work, we found it’s not always the case that the more budget we allocate, the better,” Bao said. “You don’t want to spend too much money, because the outcome is not going to be as good as spending less money.”
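As a toy illustration of that finding, here is a short Python sketch with made-up figures — the loss amount and the diminishing-returns curve are assumptions for illustration, not Bao’s actual model. Each extra dollar of defense buys progressively less risk reduction, so at some point the spending itself outweighs the benefit.

```python
# Toy model with made-up figures: total expected cost of different cyberdefense
# budgets, where each extra dollar buys progressively less risk reduction.

import math

EXPECTED_LOSS = 50_000_000  # assumed loss if an attack fully succeeds

def residual_risk(budget: float) -> float:
    """Fraction of the expected loss that remains; it shrinks with spending, but ever more slowly."""
    return math.exp(-budget / 15_000_000)

def total_cost(budget: float) -> float:
    """Money spent on defense plus the risk-weighted loss that still gets through."""
    return budget + residual_risk(budget) * EXPECTED_LOSS

for budget in (10_000_000, 20_000_000, 40_000_000):
    print(f"Budget ${budget / 1e6:.0f}M -> total expected cost ${total_cost(budget) / 1e6:.1f}M")
```

In this toy model, $20 million of defense beats $10 million, but $40 million is worse than either because the spending outstrips the remaining risk; a real game-theoretic model would also factor in how attackers adapt to the defender’s budget.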

Bao sees the cybersecurity arena moving toward using cyberautonomous systems in the future.

“The world is becoming more complicated,” she said. “We definitely need computers to help us gain a more comprehensive understanding and make good decisions.”

Avoiding legal tech lag

“I find that everything in the Global Security Initiative has a legal, ethical or policy dimension to it,” said Diana Bowman, who studies the international governance of emerging technologies. Bowman is a professor in the Sandra Day O’Connor College of Law and the School for the Future of Innovation in Society, and she frequently collaborates with the Global Security Initiative on her research.

Although international governance includes things like laws and treaties, much of what Bowman focuses on is what she calls soft law — methods that are not legally binding but that sway how parties act by encouraging certain behavior. The World Economic Forum and the Organisation for Economic Co-operation and Development are two examples of powerful soft law influencers.

Soft law becomes critical in regulating emerging technology because traditional law can’t keep up with technological development.

“When we’re talking about having legislation passed or a new treaty, it can literally take years of negotiation. By the time you actually get an agreement, the technology has moved on in many different ways. In many cases it’s not really the most effective way to regulate a technology or its applications,” Bowman said.

On the other hand, soft law is more agile. It’s easier to have a standard-setting organization encourage companies to self-regulate, creating a kind of quick governance. Even so, there’s still a lag with soft law, and enforcement is trickier, relying more on incentives than consequences. As such, not everyone is a big proponent of soft law as a governance tool.

Both traditional and soft law fall behind technology to some degree because technological evolution is faster than ever before.

“Also, a lot of platform technologies have many different applications in various different realms. It’s a lot harder to keep up when the trajectory of a technology is less certain than what we would have seen four decades ago,” Bowman said.

She cites nanotechnology as an example. When it was first introduced, it was hard to imagine all of its possible applications. Now, it’s used in everything from cosmetics to airplanes and involves many different regulatory agencies.

Technology law is also tricky because it has to encompass not only governments and federal agencies, but also privately owned companies, including multinationals. But despite the difficulty, it’s becoming increasingly important. Technology is deeply embedded in our critical infrastructure, and the Colonial Pipeline hack highlights how infrastructure is vulnerable on both the cybersecurity and legal fronts. Bowman believes that targeting critical infrastructure will be a powerful tool for adversaries in the future.

“It’s a lot easier for a foreign nation to pay 10,000 hackers to target critical infrastructure than to build bombs and deploy them,” she said. “The type of attacks that we’ve seen will continue, and national interest suggests that we do have to create solutions that are far more proactive and agile.”

Diplomacy will have a key role going forward, since a cyberattack could come from outside the U.S., limiting our ability to bring someone to justice. In general, a better understanding of how to influence behavior around emerging technology, through tools such as international trade agreements, will be important.

“A lot of people think of governance only in terms of the ability to bring somebody to court, but there are many different ways that you could encourage or discourage behavior,” Bowman said. “That’s what we really are talking about when talking about governance.”

The challenge of cybersecurity is too complex for one person — or even one academic field — to have the knowledge necessary to solve it. That’s why the Global Security Initiative brings together expert minds from across ASU’s research community; a problem that affects so many also requires many perspectives to understand it.

“This kind of research is going to be valuable whether you’re talking about the laptop on your desk or large-scale industrial control systems like we see in infrastructure,” Winterton said.
