Does Cisco's Data Center Analytics Update Truly Enable Zero-Trust?

Is Cisco’s most sophisticated network monitoring system to date capable of supporting the infosec community’s most desired trust model?

The latest release of Cisco Tetration, its data center analytics service for networked application performance, announced last week, completes a feature that many DevOps professionals have been requesting – one that Cisco touted when the service was first introduced last June: application segmentation.  It should enable DevOps to devise rules and policies for network traffic based solely upon the applications that generate it.

Cisco told reporters that its implementation of app segmentation fully enables the zero-trust model, which security engineers define as a policy enforcement regime that treats all traffic as untrusted unless a rule explicitly allows it.

“The way one implements zero-trust is [to] assume all traffic is bad unless a policy states otherwise,” Yogesh Kaushik, Cisco’s senior director of product management, wrote in a note to Data Center Knowledge.
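
To make that default-deny idea concrete, here is a minimal sketch in Python.  It is not Cisco's implementation; the rule fields and application names are hypothetical, chosen only to show that anything lacking an explicit allow rule is dropped.

```python
# Minimal default-deny sketch: traffic is dropped unless a rule
# explicitly allows it.  Rule fields and app names are hypothetical,
# not Tetration's actual policy schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    src_app: str   # application that originates the traffic
    dst_app: str   # application that receives it
    port: int

ALLOW_RULES = {
    Rule("payroll-web", "finance-db", 5432),
}

def permit(src_app: str, dst_app: str, port: int) -> bool:
    """Zero-trust default: deny unless an explicit allow rule matches."""
    return Rule(src_app, dst_app, port) in ALLOW_RULES

assert permit("payroll-web", "finance-db", 5432)       # explicitly allowed
assert not permit("unknown-svc", "finance-db", 5432)   # denied by default
```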

But professionals in the infosec and networking industries say Cisco’s implementation of machine learning algorithms -- as Kaushik described it -- may not be zero-trust as it’s generally known.  At issue is whether a baseline of trust, even if that baseline is generated by an algorithm, provides the reliable default level of skepticism that DevOps expects.

What’s the big deal about getting the definition right?  Cloud service providers, enterprise data centers, and public sector services at the municipal, state, and federal levels are all studying zero-trust more seriously, as a more effective methodology for locking down systems and protecting customer data.

When the cloud services market became bogged down with an overabundance of “services-as-a-service,” the US Commerce Dept.’s NIST weighed in, publishing specifications for SaaS, PaaS, and IaaS that the world now follows.  As important as personal data protection has already become, NIST, or an agency commanding equal respect, may be called upon to decide where trust ends and skepticism begins.

Baseline

The telemetry that populates Tetration’s data center analytics engine is acquired from multiple sources, including APM-like agents inserted into workloads as well as Cisco’s Nexus 9000 switches directly, as we reported last June.  At a granular level, Tetration aims to determine which applications are responsible for which packets, and to execute and enforce rules based on those determinations.

As Cisco’s Kaushik told us, “The first problem Tetration solved was looking at the current data center and apps and showing exactly what communication occurs.  We then use machine learning to identify patterns of behavior and a baseline.  The customers at this point can either (a) take the current pattern and implement a policy, so the same behavior persists in future (if it ain’t broke, don’t touch it); or (b) a better model:  Use the baseline for what it is: a baseline for current behavior, and start pruning edges and communications to see how it impacts applications.  So with few iterations, you can tighten the policy.”
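
Kaushik’s option (b) amounts to a simple loop: start from the observed baseline, remove one edge at a time, and keep each cut only if the application still works.  The sketch below illustrates that loop under stated assumptions; the flow edges and the health check are invented placeholders, not Tetration internals.

```python
# Sketch of the baseline-then-prune loop Kaushik describes.  Flows are
# (source, destination) edges observed on the wire; "impact" is
# whatever test tells you an application broke.  All of it is assumed
# for illustration.

observed_flows = {
    ("web", "app"), ("app", "db"),
    ("web", "db"),            # suspicious: web talking straight to the DB
}

# Step 1: baseline policy = exactly what was observed
# ("if it ain't broke, don't touch it").
policy = set(observed_flows)

def app_still_works(candidate_policy):
    """Placeholder for a real health check run against the application."""
    required = {("web", "app"), ("app", "db")}   # assumed minimum
    return required <= candidate_policy

# Step 2: iteratively prune edges; keep a cut only if nothing breaks.
for edge in sorted(observed_flows):
    trial = policy - {edge}
    if app_still_works(trial):
        policy = trial        # tightened without breaking the app

print(sorted(policy))   # [('app', 'db'), ('web', 'app')] -- web->db pruned
```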

Kaushik admitted that this second release of the data center analytics service is still limited to virtual machine-based workloads, as opposed to Docker or OCI containers on distributed or microservices systems.  Container support remains forthcoming, he said.

DevOps will be able to write policies for Tetration using a number of methods, he added, including through an open API that accesses its streaming analytics using the Apache Kafka model.  This way, developers or DevOps may write scripts or applications that address Tetration using languages such as Python and Scala.
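
As a rough illustration of what such a script might look like, here is a hedged Python sketch that consumes a Kafka stream with the kafka-python library.  The broker address, topic name, and message fields are assumptions made for the example; the real schema would come from Tetration’s API documentation.

```python
# Hedged sketch of consuming a Kafka-based analytics stream with the
# kafka-python library.  Broker address, topic name, and the JSON
# message shape are illustrative assumptions, not Tetration's
# published schema.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "tetration-flows",                      # hypothetical topic name
    bootstrap_servers="tetration.example.com:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    flow = message.value
    # React to flows the policy engine marked as denied
    # (field names are assumed).
    if flow.get("policy_result") == "denied":
        print(f"blocked: {flow.get('src_app')} -> {flow.get('dst_app')}")
```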

The latest Tetration hardens the definition of what’s trustworthy and what’s not, enabling the individuals responsible for managing specific classes of traffic to make those determinations.  For example, he suggested, an information security professional may decide that a financial database must be inaccessible to all but authorized finance applications, or, alternately, that all Windows Server instances lacking a specific security patch should be treated as inoperative.  (Microsoft has implemented a similar feature in its own server OS since Windows Server 2008 R2.)

“The Tetration platform takes all this input, along with current behavioral data, and merges the policy to create a unified common trust model,” Kaushik went on.  “If someone changes one of the rules, the platform re-computes the complete trust model in real-time.  That’s what we push down to servers and infrastructure.  If a new workload pops up, we push the right policy based on its attributes, such as finance data or unpatched OS, etc., rather than ephemeral characteristics like IP address, etc.”
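
The attribute-driven matching Kaushik describes can be pictured as a lookup keyed on workload labels rather than IP addresses.  The sketch below is purely illustrative; the label names and rule strings are hypothetical, not Tetration’s policy language.

```python
# Illustrative attribute-driven policy selection: a new workload gets
# rules matched to its labels (finance data, missing patch), never to
# its IP address.  Labels and rule actions are hypothetical.

POLICIES = {
    "finance-data": ["allow traffic from finance apps only"],
    "unpatched-os": ["quarantine: deny all traffic"],
}

def policy_for(workload_labels):
    """Collect every rule whose attribute matches the workload."""
    rules = []
    for label in sorted(workload_labels):
        rules.extend(POLICIES.get(label, []))
    return rules or ["default: deny all"]   # zero-trust fallback

new_vm = {"finance-data", "unpatched-os"}   # attributes, not an IP
print(policy_for(new_vm))
# ['allow traffic from finance apps only', 'quarantine: deny all traffic']
```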

Zero-Trust with Added Trust?

“In the spirit of zero-trust (verify, but never trust),” Lori MacVittie, F5 Networks’ principal technical evangelist, wrote in a note to Data Center Knowledge, “accepting a set of exceptions generated on the basis of ‘normalcy’ — which implies frequency and not necessarily legitimacy — certainly seems to negate the purpose.  Doing so automatically pretty much guarantees violation, as there’s little ‘verify’ occurring.”

“My understanding of the concept of zero-trust,” stated Chet Wisniewski, senior security advisor for security services maker Sophos, “is literally what the words are.  You don’t trust anything.

“The antiquated concept that there’s bad guys on the outside and good things on the inside — and that there is such a thing as an ‘inside’ and an ‘outside’ — is a broken model,” Wisniewski continued.  “Zero-trust turns that model around, and says, ‘Just because Sally and Greg work for you, don’t trust that what they’re doing is safe.’  Because it may or may not be.”

Rather than whitelisting baseline behaviors, Wisniewski perceives zero-trust as accepting nothing as normal or natural behavior — not even what machine learning algorithms may detect.  “No traffic, by its existence or where it came from, is necessarily good traffic.”

That said, he acknowledges that methods of learning general traffic patterns — even algorithmic methods — may be not only useful but necessary.  This way, aberrations such as “Sally” accessing a boatload of records at 3:00 a.m. can be red-flagged.
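
A toy version of that red flag might compare a user’s current access volume against a baseline learned from history.  Everything in the sketch below is invented for illustration; production systems model behavior far more carefully.

```python
# Toy "Sally at 3:00 a.m." check: flag access volume that sits far
# outside a baseline learned from history.  All numbers are invented.
from statistics import mean, stdev

# Records Sally fetched during the 03:00 hour over the past two weeks.
history = [2, 0, 1, 3, 0, 2, 1, 0, 2, 1, 3, 0, 1, 2]
tonight = 4800          # a boatload of records

mu, sigma = mean(history), stdev(history)
if tonight > mu + 3 * sigma:
    print(f"red flag: {tonight} reads vs. baseline {mu:.1f} +/- {sigma:.1f}")
```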

“The model has to be fluid,” said Wisniewski.  “It’s not something that a human being can define, because Sally’s job may evolve over time, and she may have a role change.  There are a million different things that have to be fluid in an environment, and it’s really hard for models to evolve if they’re made by humans.  There needs to be an algorithmic angle to it.”

That sounds more and more like the model Cisco’s Kaushik described.  So maybe the problem is not with how Tetration actually works — just with how the data center analytics service is being marketed.

“Zero-trust requires a legitimate reason for the exception other than, ‘It’s normal behavior,’ particularly when it comes to applications,” wrote F5’s MacVittie.  “Just because there’s regular (normal) communication between two apps or app components does not (or should not) imply it’s legitimate, especially if you’re trying to move an existing environment to a zero-trust model.”
