The shortage of human cyber experts, vast cyber surface, and speed of machines are driving the application of artificial intelligence (AI) in cyberspace. Indeed, the national strategies of the US and China make it clear that future cyberspace operators will be augmented with autonomous agents. If an attacker is using AI to operate at machine speed, defense must occur at least as quickly to be effective. Research is already underway to develop more robust defensive agents that can hunt for and neutralize threats on their networks. Similar focus exists for the development of attack capabilities.
As a result of AI and the growing cyber surface, we can expect adversaries to deploy autonomous actors that carry out attacks, at varying rates, against sets of potential targets. Similarly, defenders will employ cognitive agents to counter attacks on their networks at machine speed in 24/7 operations. One can imagine autonomous cyber-hunt agents that select the right machine learning (ML) modules for specific times and contexts, reason over the information those modules provide, and collaborate with their human teammates to eradicate threats. Preparing for this eventuality, SoarTech is leading efforts to employ AI in pen-testing, cyber training, and network defense. SoarTech’s Cyber Cognitive Attacker (CyCog-A) is a synthetic, offensive, cognitive agent that emulates real attackers by modeling the complex thoughts, decision-making, and contextual understanding of a human operator. Its goal-seeking behavior produces a wide range of realistic attacks, such as phishing, remote exploitation, and SQL injection. CyCog-A can also scan for hosts, services, and vulnerabilities; perform lateral movement inside a breached network; and exfiltrate files of interest.
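The goal-seeking behavior described above can be illustrated with a minimal sketch: an agent that works through kill-chain stages toward a goal while maintaining an internal model of what it has done. The class, stage names, and target are illustrative assumptions, not the actual CyCog-A API.

```python
# Hypothetical sketch of a goal-seeking attack-emulation loop.
# Stage names and the AttackAgent class are assumptions for
# illustration only, not CyCog-A's real interface.

KILL_CHAIN = ["scan", "exploit", "lateral_movement", "exfiltrate"]

class AttackAgent:
    def __init__(self, goal="exfiltrate"):
        self.goal = goal
        self.log = []    # actions taken, in order
        self.model = {}  # agent's internal model of the network

    def act(self, stage, target):
        """Simulate one cyber-action and record it in the internal model."""
        self.log.append((stage, target))
        self.model.setdefault(target, []).append(stage)

    def pursue_goal(self, target):
        """Advance through kill-chain stages until the goal stage is reached."""
        for stage in KILL_CHAIN:
            self.act(stage, target)
            if stage == self.goal:
                break
        return self.log

agent = AttackAgent()
actions = agent.pursue_goal("10.0.0.5")
```

Because the loop stops at the goal stage, the same agent configured with `goal="scan"` would halt after reconnaissance, which is one simple way to vary attack behavior per target.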
CyCog-A is built upon the Soar cognitive architecture, a symbolic production system capable of symbolic learning (e.g., rules and productions), episodic learning (temporal memory), and non-symbolic reinforcement learning. Non-symbolic ML contributes fast, scalable classification, pattern matching, and prediction, while symbolic AI drives the integration and sense-making of the resulting information. Over multiple episodes, the agent can learn from experience by reapplying actions that succeeded in previous encounters and condensing those steps into shorter, more efficient chains. Alternatively, a Soar agent may learn effective policies by tuning its operator preferences over a series of decision-making steps. The CyCog agent’s actions include updating its own internal mental model and executing a wide variety of command-line tools to carry out a specific cyber-action (e.g., exploit, install payload). The time to execute a decision cycle is typically less than the 50 ms hypothesized for a human decision cycle, making the agent highly reactive to changes in the external environment.
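A minimal sketch of the idea of tuning operator preferences over repeated decision cycles might look like the following. The operator names and the simple incremental update rule are assumptions for illustration; Soar's actual reinforcement-learning mechanism is more elaborate.

```python
# Illustrative sketch of preference-based operator selection with a
# simple reinforcement update. Not Soar's actual RL specification.

class DecisionCycleAgent:
    def __init__(self, operators, lr=0.1):
        # One numeric preference (expected value) per operator, initially 0.
        self.prefs = {op: 0.0 for op in operators}
        self.lr = lr

    def select(self):
        """Selection phase: choose the operator with the highest preference."""
        return max(self.prefs, key=self.prefs.get)

    def update(self, op, reward):
        """Learning phase: nudge the chosen operator's preference toward the reward."""
        self.prefs[op] += self.lr * (reward - self.prefs[op])

agent = DecisionCycleAgent(["scan", "exploit", "exfiltrate"])

# Reward 'scan' over several cycles; its preference rises above the others,
# so the policy converges on selecting it.
for _ in range(5):
    op = agent.select()
    agent.update(op, reward=1.0 if op == "scan" else 0.0)
```

Each pass through `select` and `update` stands in for one decision cycle; running many such cycles per second is what lets an agent of this kind stay reactive to a changing environment.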
This presentation will cover CyCog-A’s use of AI to pen-test systems and the methods available for AI to defend cyberspace. Topics include how CyCog-A can make sense of complex information, collaborate with human teammates, emulate real attackers, and perform cyber operations at machine speed. An overview of future work for the CyCog architecture will also be offered.