Toward Intelligent Autonomous Agents for Cyber Defense: Report of the 2017 Workshop by the North Atlantic Treaty Organization (NATO) Research Group IST-152-RTG

Reading time: 5 minutes

📝 Original Info

  • Title: Toward Intelligent Autonomous Agents for Cyber Defense: Report of the 2017 Workshop by the North Atlantic Treaty Organization (NATO) Research Group IST-152-RTG
  • ArXiv ID: 1804.07646
  • Date: 2018-02-15
  • Authors: Ryan Thomas; Martin Drašar

📝 Abstract

This report summarizes the discussions and findings of the Workshop on Intelligent Autonomous Agents for Cyber Defence and Resilience organized by the NATO research group IST-152-RTG. The workshop was held in Prague, Czech Republic, on 18-20 October 2017. There is a growing recognition that future cyber defense should involve extensive use of partially autonomous agents that actively patrol the friendly network, and detect and react to hostile activities rapidly (far faster than human reaction time), before the hostile malware is able to inflict major damage, evade friendly agents, or destroy friendly agents. This requires cyber-defense agents with a significant degree of intelligence, autonomy, self-learning, and adaptability. The report focuses on the following questions: In what computing and tactical environments would such an agent operate? What data would be available for the agent to observe or ingest? What actions would the agent be able to take? How would such an agent plan a complex course of actions? Would the agent learn from its experiences, and how? How would the agent collaborate with humans? How can we ensure that the agent will not take undesirable destructive actions? Is it possible to help envision such an agent with a simple example?

💡 Deep Analysis

Figure 1

📄 Full Content

This report summarizes the discussions and findings of the Workshop on Intelligent Autonomous Agents for Cyber Defence and Resilience organized by the North Atlantic Treaty Organization (NATO) research group IST-152-RTG. The workshop was held in Prague, Czech Republic, on 18-20 October 2017, at the premises of the Czech Technical University in Prague. The workshop was unclassified, releasable to the public, and open to representatives of NATO Partnership for Peace (PfP)/Euro-Atlantic Partnership Council (EAPC) nations. It was chaired by program co-chairs Prof Michal Pechoucek, Czech Technical University, Prague, Czech Republic, and Dr Alexander Kott, US Army Research Laboratory, United States.

The workshop explored opportunities in the area of future intelligent autonomous agents in cyber operations. Such agents may potentially serve as fundamental game-changers in the way cyber defense and offense are conducted. Their autonomous reasoning and cyber actions for prevention, detection, and active response to cyber threats may become critical enablers for the field of cybersecurity.

Cyber weapons (malware) are rapidly growing in their sophistication and in their ability to act autonomously and adapt to specific conditions encountered in a friendly system/network. Current practices of cyber defense against advanced threats remain heavily reliant on largely manually driven analysis, detection, and defeat of such malware. There is a growing recognition that future cyber defense should involve extensive use of partially autonomous agents that actively patrol the friendly network, and detect and react to hostile activities rapidly (far faster than human reaction time), before the hostile malware is able to inflict major damage, evade friendly agents, or destroy friendly agents. This requires cyber-defense agents with a significant degree of intelligence, autonomy, self-learning, and adaptability.
Autonomy, however, comes with difficult challenges of trust and control by humans.

The workshop investigated how the directions of current and future science and technology may impact and define potential breakthroughs in this field. The presentations and discussions at the workshop produced this report. It focuses on the following questions that the participants of the workshop saw as particularly important:

• In what computing and tactical environments would such an agent operate?

• What data would be available for the agent to observe or ingest?

• What actions would the agent be able to take?

• How would such an agent plan a complex course of actions?

• Would the agent learn from its experiences, and how?

• How would the agent collaborate with humans?

• How can we ensure that the agent will not take undesirable destructive actions?

• Is it possible to help envision such an agent with a simple example?

In addition to this report, the papers presented at the workshop were published as a separate volume, Intelligent Autonomous Agents for Cyber Defence and Resilience: Proceedings of the NATO IST-152 Workshop, Prague, Czech Republic, 18-20 October 2017, edited by Alexander Kott and Michal Pechoucek, which can be found online at http://ceur-ws.org/Vol-2057/ .

Authors: Ryan Thomas and Martin Drašar

With the proliferation of machine-learning (ML) methods in recent years, it is likely that autonomous agents will become commonplace in day-to-day military operations. We expect a significant boost in their capabilities owing to both algorithmic advancements and adoption of purpose-built ML hardware. However, the range of agents’ functions will still be, in the foreseeable future, limited by a number of environmental factors, which we attempt to enumerate.

In this section, we recognize two types of autonomous agents as two extremes on the capability scale. At one extreme are preprogrammed heuristic agents, which respond only to specified stimuli with a set of preset actions. At the other extreme are robust intelligent systems with advanced planning and learning characteristics.

Capability is then the aggregate of an agent’s intelligence, awareness, connectedness, control, distributedness, level of autonomy, and adaptability.

Environmental factors limit the specific functions and abilities of particular agents, and the combination of these factors places an upper bound on agents' capabilities.
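The report does not formalize this capability model, but the two preceding paragraphs suggest a simple reading: an agent has an intrinsic score in each capability dimension, the deployment environment caps each dimension, and aggregate capability is bounded by those caps. The sketch below is purely illustrative; the dimension names come from the text, while the 0-to-1 scale, the per-dimension clipping, and the averaging are assumptions made for the example.

```python
from dataclasses import dataclass

# Capability dimensions named in the text; scoring scale and aggregation
# scheme are illustrative assumptions, not part of the report.
DIMENSIONS = ("intelligence", "awareness", "connectedness", "control",
              "distributedness", "autonomy", "adaptability")

@dataclass
class Agent:
    # Intrinsic score per dimension: 0.0 (preprogrammed heuristic)
    # up to 1.0 (robust intelligent system).
    scores: dict

def effective_capability(agent: Agent, env_caps: dict) -> float:
    """Clip each dimension by the environment's upper bound, then average.

    Missing entries default to 0.0 for the agent (no ability) and 1.0 for
    the environment (no constraint).
    """
    clipped = [min(agent.scores.get(d, 0.0), env_caps.get(d, 1.0))
               for d in DIMENSIONS]
    return sum(clipped) / len(DIMENSIONS)

# A preprogrammed heuristic agent vs. a robust intelligent system, both on
# a mobile platform whose intermittent connectivity caps two dimensions
# (the cap values are assumed for illustration).
heuristic = Agent({d: 0.2 for d in DIMENSIONS})
robust = Agent({d: 0.9 for d in DIMENSIONS})
mobile_env = {"connectedness": 0.3, "distributedness": 0.4}

print(effective_capability(heuristic, mobile_env))  # env caps never bind
print(effective_capability(robust, mobile_env))     # env caps bind hard
```

The point the example makes is the one in the text: the same environment barely affects a low-capability agent but substantially limits a high-capability one, so the environment, not the agent design, sets the ceiling.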

The following sections provide a list of these factors and their impact.

Autonomous agents deployed at stationary structures (e.g., buildings or weapon systems) should suffer the fewest limitations in their operation, as it can reasonably be expected that such agents will have enough power, processing capacity, connectivity, and other resources needed to carry out the most complicated of tasks. These systems will be restricted mostly by the ML state of the art.

Agents deployed on mobile platforms (e.g., vehicles, Soldiers, or missiles) will inevitably be limited by intermittent connectivity; power, space, and processing constraints; o


