Jon Green is VP and Chief Security Technologist at Aruba, a Hewlett Packard Enterprise Company. He is responsible for providing technology guidance and leadership for all security solutions including authentication and network access control, UEBA, encryption, firewall, and VPN. He also manages Aruba’s Product Security Incident Response Team (PSIRT) and Aruba Threat Labs, an internal security research group. Green joined Aruba in 2003 and helped it grow from a small startup to today’s position as a leading provider of network mobility solutions. Prior to Aruba, he held product management, marketing, and sales positions with Foundry Networks, Atrica, Nortel Networks, and Bay Networks. Green holds a B.S. in Information Security from Western Governors University and an M.S. in Computer Science with a concentration in Information Security from James Madison University.
In an exclusive interview with Augustin Kurian, Sr. Feature Writer at CISO MAG, Green talks about his journey that led to Aruba, IoT security and examination of data, myths surrounding Zero Trust, and the future of AI and ML in cybersecurity.
You started your career as a technical engineer with Nortel Networks. You have donned the hats of both technology and marketing leaders. Tell us a bit about yourself and the journey that led to Aruba. How do you think Enterprise security has evolved during this time?
I got my start even before Nortel, doing tech support for a dialup ISP in the very early days of the commercial Internet. Later, I went to work for Bay Networks, which was bought by Nortel. That was where I first worked for Keerti Melkote, who had also joined Nortel by way of acquisition. A few years after Keerti founded Aruba, I called him up wondering what Aruba was and what I could do for him. At that point he was still hiring QA engineers and he asked if I wanted to do that. In hindsight, I should have said yes, but I waited another six months and then joined Aruba as a combination product manager and technical marketing engineer; in early startup days you do whatever needs to be done and wear as many hats as necessary. In many ways, 16 years later, that’s still true even as part of Hewlett Packard Enterprise (HPE); I took an interest in security, found a lot of work that needed to be done, and started doing it.
In many ways, security hasn’t changed much at a technical level since the early days of computers. What has changed is the business and personal impact. We’ve always had IT security vulnerabilities, and we’ve always had bad actors seeking to use those vulnerabilities for their own purposes. In 2020, we still see the same sorts of technical weaknesses: poor programming, misconfiguration, and a lack of effective threat modeling. What has changed over the years is the impact. In 1995, the financial rewards for computer crime were relatively low. Today, that is not the case, and as a result, you now see professional attackers who are being paid to do what they do. The practical scope of security today is therefore tied to authentication, identity, creation of policy, and the monitoring of bad behavior.
A big part of IoT security revolves around a close examination of data (from sensors). What are the primary methods you recommend for examining the data generated by IoT devices for security purposes?
This is one of those “it depends” answers that security people are so fond of. As IoT devices continue to proliferate, the number of sensors and other connected things has jumped dramatically; we’re seeing something like 14 million IoT devices entering the network daily. If we’re not paying attention, this creates a massive problem in the form of large blind spots for organizations, essentially growing the attack surface they also need to manage from a security standpoint. The question then is, what does “manage” mean? Are we worried about the data, or the device? It depends.
From a security standpoint, we do need to worry about ensuring that the devices themselves do not compromise enterprise security; the now-famous “Wi-Fi fish tank thermostat” has become the classic example for that. The primary control method here is network segmentation down to the most granular level you can get to: a single device, if your infrastructure supports it. In a nutshell, we want to make sure the device can get to its approved services and nothing else. The data generated by the devices could be a different story depending on the impact of a security breach. If I’m running uranium centrifuges (à la Stuxnet), then my sensor data matters a great deal to my mission. If it’s a Wi-Fi fish tank thermostat and I’ve got some inexpensive goldfish, then maybe not. There are some security products out there which are specifically focused on industrial control systems and ensuring the data flowing through those systems is correct, but in general these systems are going to be very application specific.
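The default-deny, allow-list idea behind that kind of per-device segmentation can be sketched in a few lines. This is a conceptual illustration only, with hypothetical device names and destinations; it is not Aruba's actual policy engine:

```python
# Hypothetical per-device allow-list: each IoT device may reach only
# its approved services; everything else is denied by default.
ALLOWED_FLOWS = {
    "fish-tank-thermostat": {("cloud.vendor.example", 443)},
    "hvac-sensor-12":       {("bms.internal.example", 8883)},  # MQTT over TLS
}

def is_flow_permitted(device: str, dest_host: str, dest_port: int) -> bool:
    """Default-deny: unknown devices and unlisted destinations are blocked."""
    return (dest_host, dest_port) in ALLOWED_FLOWS.get(device, set())
```

The key property is that an unrecognized device, or a recognized device reaching for an unapproved service, gets no access at all rather than falling through to a permissive default.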
How does the corporate IT team balance the needs of security vs. connectivity, especially when IoT devices can often be brought onto the network outside the purview of the IT organization?
They shouldn’t necessarily have to; they can do both. From the start, when Aruba was designing first-generation products back in 2002, we were building WLANs and we considered security a critical requirement: the idea that devices or users could not be inherently trusted, and the idea that Wi-Fi would be a multi-user, multi-access network that needed to support different classes of users and devices on the same network. We needed policies for all users and devices, and then ways to enforce them, such as a user-facing firewall. Today we provide that capability for Wi-Fi, wired networks, remote access, and SD-WAN. To a great extent, it’s OK if people plug in IoT devices without talking to IT, as long as those devices are prevented from doing damage to the enterprise.
What can be the best approach to secure connectivity to the cloud and how can enterprises harness IoT while keeping the network and sensitive corporate assets safe?
Knowing where your sensitive data is now, and where it is going to be in the future, is the number one rule of security for the cloud. It needs to be protected through best practices like encryption. Be it in the cloud or the edge network, businesses are ultimately responsible for any loss of data that impacts customers. Additionally, organizations need to ensure they are thoroughly vetting any edge device, applying the right level of access control to the device, and encrypting the data being generated as it flows into the cloud. This means that organizations of all sizes need to develop processes to apply security rigor and thoroughly test any device or set of devices being used to support business functions.
Aruba’s single switching platform runs on a network operating system from the enterprise edge to the core to the data center. Tell us a bit about this new technology. What makes it so unique and innovative?
We, like other players in the networking industry, have years and years of legacy technology behind us. I don’t mean “legacy” in a negative way; there’s tremendous value in having deep expertise in the field going back decades. However, our customers are looking for some consistency in the technology they deploy, largely because a smaller number of people are having to support larger and larger networks. Having a single operating system and a consistent set of silicon across an entire edge-to-data-center network provides huge advantages when you consider both network management and policy enforcement, because suddenly you are managing and monitoring an entire network as one single entity. This takes away one of the biggest security challenges: ensuring that all the assets are not only accounted for, but have the correct policy settings to control users and devices and, in turn, be able to discover anomalous behavior that may be the precursor to a costly hack.
How do you define Zero Trust Networking? It is difficult or impossible to adapt legacy infrastructure for a zero trust world. You’ve said that things like IoT don’t fit in a zero-trust world and are more toward network access control.
Zero Trust (ZT) in a nutshell means your access to an IT resource does not hinge on your mere presence on a network. It doesn’t give you access you would not have otherwise. In other words, just being there does not mean being authenticated. It’s the direct opposite of the “fortress mentality” where everything outside the perimeter is bad and everything inside the perimeter is good.
ZT security is an important concept being applied to a lot of things over the past several years, but the most important thing to think about is that ZT is not a destination that you magically arrive at with new equipment; it’s a journey. It’s not a product you buy; it’s a design philosophy.
My comments around IoT and ZT have reflected the fact that IoT devices often come with minimal security capabilities. They don’t have secure credentials, can’t authenticate with anything and may not even support encryption. My point is, you can plan for a ZT framework but you’re always going to have pieces that don’t play in that world—so plan for that. Have a way to connect those devices and let them get their job done without letting them be a risk to the rest of the enterprise.
What are the myths around zero trust? You have often been quoted stating that users don’t always understand what a Zero Trust approach entails or how to necessarily move in that direction–do you think security vendors are confusing that perspective?
In a word, “yes.” ZT is overhyped because vendors are overusing it, which happens with many terms in the industry—how many vendors have used “AI” or “blockchain” to sell you legacy technology with a new name? The problem is that hype confuses the marketplace. User identity needs to be tied to strong credentials, and the use of two-factor authentication. You cannot just trust a device. Access must be the same inside or outside the building. Furthermore, there needs to be a monitoring step, so after authentication, events are also logged.
Overall, ZT is an aspirational journey that brings together many elements to create a single concept (or rule to live by). But, be very careful of vendors who tell you that you can buy ZT from them. They might have products that can help move you toward a ZT framework, but nobody can provide the entire solution.
Can zero trust and network access control co-exist today? Or will zero trust make Network Access Control obsolete?
Depending on how you look at the definition of ZT, the two technologies are either simply two parts of the same spectrum, or at the very least are complementary technologies. Network Access Control (NAC) is focused on authenticating and controlling access to enterprise networks, while in most people’s definition, ZT is about controlling access to applications. I tend to think of NAC as being “zero trust for networks” — it’s saying that just because you can connect to a Wi-Fi network or a wired port, that doesn’t give you wide-open access to all services. Access is based on identity and context. ZT, using the popular definition, is doing the same thing, but the object being accessed is an application instead of a network.
One of the more famous instantiations of ZT actually uses NAC as a context source — a device that is authenticated to a corporate network provides information to the access proxy that devices on external networks can’t provide. So, I don’t see a conflict here — NAC is going to be around for a long time, even as ZT frameworks come into prominence and the technologies will easily co-exist.
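The access-proxy pattern Green describes can be sketched as a simple decision function: identity is always required, and NAC-supplied network context acts as one more signal rather than a free pass. This is a minimal illustration with invented field names, not any vendor's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AccessContext:
    user_authenticated: bool         # strong credentials plus 2FA
    device_on_managed_network: bool  # context supplied by NAC, if available
    device_posture_ok: bool          # e.g., endpoint health check passed

def grant_app_access(ctx: AccessContext) -> bool:
    """Zero Trust: network presence alone never grants access.

    Identity is mandatory; NAC context or a posture check then raises
    confidence enough to admit the device to the application.
    """
    if not ctx.user_authenticated:
        return False
    return ctx.device_posture_ok or ctx.device_on_managed_network
```

Note that a device on the corporate network with no authenticated user is still denied, which is the inversion of the old fortress model.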
Lastly, AI and ML have been driving innovation in the space of cybersecurity. How can enterprises leverage technologies like AI/ML, or other tools to further automate IoT and network security?
As security becomes more complex, and organizations continue to experience skills and resources gaps, technologies like AI/ML are going to become increasingly important to ensure security scales with the business infrastructure. To start with, ML is simply a sub-category of AI, and most of the work in security today uses ML, so that’s the term I will use. We would typically contrast ML with “rules.” I want to discover if a certain risk exists, or if a certain event has taken place, or if a series of events has taken place — any of which could indicate a security problem. Sometimes I can do this with rules, but very often the number and complexity of the rules can become overwhelming for a human, and that’s where ML can potentially step in. ML can look at huge amounts of data, pull out very weak signals, and classify that data or, at the very least, identify things that are interesting.
I’ll give a practical example: Most organizations today are regularly running vulnerability scanners on their networks. These scanners are great for finding misconfigured systems and common weaknesses, but they are also notorious for throwing off piles of false positives. Unfortunately, those false positives often end up in my mailbox as our customers demand to know why their Aruba equipment is vulnerable to a 20-year-old CVE that was found in a long-dead IBM database product (answer: it’s not). This is a place where a properly trained ML engine could look at the results, consider the number of times and the circumstances under which vulnerability findings were real versus false, and then weed out the false positives. We’re actually working on something like this right now, first as an internal tool but possibly something we’ll release externally in the future.
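A toy version of that triage idea can be built from nothing more than historical counts: track how often each finding was confirmed real during past reviews, then suppress findings whose true-positive rate falls below a threshold. This is a deliberately simplified sketch with invented data and class names, far short of the trained ML engine Green describes, but it shows the shape of the feedback loop:

```python
from collections import defaultdict

class FindingTriage:
    """Suppress scanner findings with a poor historical true-positive rate."""

    def __init__(self, threshold: float = 0.2):
        self.threshold = threshold
        # finding_id -> [times confirmed real, times confirmed false]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, finding_id: str, was_real: bool) -> None:
        """Feed back the outcome of a human triage decision."""
        self.counts[finding_id][0 if was_real else 1] += 1

    def likely_real(self, finding_id: str) -> bool:
        real, false = self.counts[finding_id]
        total = real + false
        if total == 0:
            return True  # no history yet: surface it for a human
        return real / total >= self.threshold
```

The important design choice is that unknown findings are surfaced rather than suppressed, so the system fails toward human review instead of silently dropping a novel, real vulnerability.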
Just as with other industry buzzwords, buyers beware with AI and ML! If a vendor wants to sell you an AI-enabled security product, ask carefully what the product does using AI that it could not do without AI. That one question will help you separate those who are really using the technology from those who are jumping on the buzzword bandwagon.
This interview first appeared in the April 2020 issue of CISO MAG.
About the Interviewer
Augustin Kurian is part of the editorial team at CISO MAG and writes interviews and features.