Zero trust has become the dominant framework for enterprise security architecture — and for good reason. The principle of "never trust, always verify" addresses the fundamental problem with perimeter-based security: once an attacker is inside the perimeter, they inherit the trust that internal network position conveys.
But organizations often try to implement zero trust backwards. They deploy identity verification, enforce device posture checks, and implement microsegmentation policies — before they have any clear picture of what's actually communicating on their network. The result is policy built on assumptions, exceptions driven by broken workflows, and a zero trust architecture that's more a compliance artifact than an operational security control.
The right starting point is visibility.
What You Need to Know Before You Can Enforce
Zero trust policy enforcement requires answers to a set of foundational questions:
- What assets exist on the network? You can't define policy for systems you don't know about.
- What are those assets doing? Which systems communicate with which, over what protocols, with what frequency?
- Which communication is legitimate? Before you can define what to block, you need to know what to permit.
- What changed recently? New systems, new communication patterns, and decommissioned assets all require policy updates.
Without answers to these questions, zero trust policies are guesses. And in enterprise environments, wrong guesses break applications — creating the emergency exceptions and broad allow rules that undermine the architecture's security value.
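The first two questions reduce to simple aggregations over flow records. A minimal sketch in Python, using hypothetical flow tuples (addresses, protocols, and field layout are all illustrative, not tied to any collector's format):

```python
from collections import defaultdict

# Hypothetical flow records as (src, dst, protocol, bytes) tuples;
# addresses and protocols are made up for illustration.
flows = [
    ("10.0.1.5", "10.0.2.8", "tcp/443", 12000),
    ("10.0.1.5", "10.0.3.9", "tcp/5432", 48000),
    ("10.0.4.7", "10.0.2.8", "tcp/443", 3000),
]

# "What assets exist?" -- every address seen on either side of a flow.
assets = {ip for src, dst, _, _ in flows for ip in (src, dst)}

# "What are those assets doing?" -- peer and protocol per source.
talks_to = defaultdict(set)
for src, dst, proto, _ in flows:
    talks_to[src].add((dst, proto))
```

Even this toy version shows the shape of the answer: the inventory and the behavior map fall out of the same records.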
Visibility as the Foundation Layer
The NIST Zero Trust Architecture (SP 800-207) explicitly identifies "continuous monitoring and validation" as a core component. That monitoring requires telemetry — and the most comprehensive, infrastructure-independent telemetry source available for network behavior is flow data.
Flow data answers the foundational questions:
Asset discovery. Every device that communicates on the network appears in the flow data. New devices are detected as soon as they generate traffic. Devices that have gone dark are identified by their absence.
Behavior mapping. The flow records for any given asset show its complete communication profile: every system it talks to, every protocol it uses, and the typical volume and timing of its conversations.
Legitimate communication baselines. Over a 2-4 week observation period, the flow data for a segment reveals the communication relationships that routine operation depends on. These become the foundation of the allow list.
Change detection. When a system starts communicating in a new way — a new destination, a new protocol, a new volume pattern — the flow data captures it. In a zero trust context, unexplained communication changes are significant events.
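Baselining and change detection can be sketched the same way. This is a toy reduction assuming each flow carries `src`, `dst`, and `proto` fields; real flow records carry far more, and real baselines would account for volume and timing too:

```python
def build_baseline(window):
    """Reduce an observation window to its set of (src, dst, proto) relationships."""
    return {(f["src"], f["dst"], f["proto"]) for f in window}

def detect_changes(baseline, new_flows):
    """Flows whose relationship tuple never appeared during baselining."""
    return [f for f in new_flows
            if (f["src"], f["dst"], f["proto"]) not in baseline]

# Hypothetical observation window (hostnames are illustrative).
window = [
    {"src": "app-01", "dst": "db-01", "proto": "tcp/5432"},
    {"src": "app-01", "dst": "dns-01", "proto": "udp/53"},
]
baseline = build_baseline(window)

# After the window closes, a new external SSH destination appears:
today = [
    {"src": "app-01", "dst": "db-01", "proto": "tcp/5432"},
    {"src": "app-01", "dst": "203.0.113.9", "proto": "tcp/22"},
]
deviations = detect_changes(baseline, today)
```

The deviation list is the raw material for the "unexplained communication changes" that matter in a zero trust context.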
The Zero Trust Implementation Sequence
The practical sequence for implementing zero trust with visibility as the foundation:
Phase 1: Discover and Map
Deploy flow collection across your environment — on-premises routers and switches, cloud VPC/VNet flow logs, virtual infrastructure. Let the collection run for 2-4 weeks without enforcing any new policies.
During this phase, build the asset inventory and the communication map. Identify every system, classify it, and document its observed communication patterns. Particular attention should go to:
- Systems with unexpected communication patterns
- Systems that talk to external destinations without clear business justification
- Systems with implicit trust relationships (e.g., broad internal access based on subnet membership) that will need explicit policy as zero trust is enforced
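The second bullet, external communication without clear justification, is straightforward to pull from the Phase 1 data. A sketch using Python's standard `ipaddress` module, with illustrative internal prefixes (real environments would substitute their own):

```python
import ipaddress

# Illustrative internal ranges -- substitute your environment's prefixes.
INTERNAL = [ipaddress.ip_network("10.0.0.0/8"),
            ipaddress.ip_network("192.168.0.0/16")]

def is_internal(ip):
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in INTERNAL)

# (src, dst) pairs from the observation window -- hypothetical values.
flows = [
    ("10.0.1.5", "10.0.2.8"),      # internal-to-internal
    ("10.0.1.5", "203.0.113.20"),  # internal-to-external: needs justification
]

# Internal systems with external peers, queued for review.
external_talkers = {src for src, dst in flows
                    if is_internal(src) and not is_internal(dst)}
```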
Phase 2: Define and Validate
Using the communication map from Phase 1, define the desired policy state: what communication should be permitted, what should be denied, and what should be challenged by additional verification (identity, device posture).
Run the policy in audit mode — log what would be blocked without actually blocking it. This validation phase typically runs for 2-4 weeks and reveals policy gaps that would break workflows in enforcement mode.
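The audit-mode loop amounts to evaluating the candidate allow list against live flows and recording, rather than blocking, the misses. A sketch with made-up hostnames and relationships:

```python
# Candidate allow list from Phase 1: permitted (src, dst, proto) relationships.
allow = {
    ("web-01", "app-01", "tcp/8080"),
    ("app-01", "db-01", "tcp/5432"),
}

# Flows observed during the validation window.
observed = [
    ("web-01", "app-01", "tcp/8080"),
    ("app-01", "ntp-01", "udp/123"),  # an infrastructure dependency the draft policy missed
]

# Audit mode: record what would be blocked, block nothing.
would_block = [f for f in observed if f not in allow]
for flow in would_block:
    print("AUDIT would-block:", flow)  # feeds policy review, not a firewall
```

Every entry in the would-block log is either a policy gap to fix before enforcement or traffic that genuinely should be denied.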
Phase 3: Enforce and Monitor
Move from audit mode to enforcement mode, segment by segment. Start with lower-risk segments (test, development) and work toward higher-stakes production environments.
Critically: enforcement mode doesn't end the visibility requirement. It increases it. In enforcement mode, blocked connections are evidence — either of policy gaps or of attempted lateral movement. Continuous flow monitoring distinguishes between the two.
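One way to make that distinction operational is to compare each blocked connection against the pre-enforcement baseline. The heuristic, labels, and function below are illustrative, a starting point for triage rather than a definitive classifier:

```python
def triage_blocked(flow, baseline):
    """Heuristic triage for a blocked (src, dst, proto) connection.

    A blocked flow that appeared in the pre-enforcement baseline suggests
    a policy gap (legitimate traffic the allow list missed); a flow never
    seen during baselining is a candidate for attempted lateral movement.
    """
    return "policy-gap?" if flow in baseline else "lateral-movement?"

# Hypothetical baseline from the observation window.
baseline = {("app-01", "db-01", "tcp/5432")}

print(triage_blocked(("app-01", "db-01", "tcp/5432"), baseline))
print(triage_blocked(("app-01", "10.0.9.44", "tcp/445"), baseline))
```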
Common Visibility Gaps That Derail Zero Trust
Shadow assets. Systems that aren't in the CMDB but appear in flow data. A zero trust framework can't assign policy to systems it doesn't know exist. Flow data surfaces them.
Undocumented service accounts. Applications frequently use service accounts for internal API calls and database connections. These communication paths rarely appear in application documentation but show up clearly in flow data.
Infrastructure dependencies. Applications depend on DNS, NTP, certificate infrastructure, and patch management systems in ways that architects don't always capture. Zero trust policies that block these paths cause hard-to-diagnose failures. Flow data reveals them before enforcement.
Periodic processes. Quarterly jobs, annual audits, and infrequent backup processes may not appear in a 2-week observation window. Organizations that don't account for these discover them the hard way when enforcement breaks them.
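The shadow-asset gap in particular reduces to a set comparison between the inventory and what the flow data has actually seen. A sketch with made-up hostnames:

```python
# Inventory vs. observation -- all names are illustrative.
cmdb = {"web-01", "app-01", "db-01", "legacy-07"}
observed = {"web-01", "app-01", "db-01", "10.0.9.44"}  # assets seen in flow data

shadow = observed - cmdb  # communicating, but not in inventory
stale = cmdb - observed   # inventoried, but never seen on the wire
```

Both differences are actionable: shadow assets need classification before policy can cover them, and stale entries are candidates for decommissioning from the inventory.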
FlowSight provides the continuous visibility layer that makes zero trust implementation tractable. It discovers assets automatically, maps communication relationships from observed flow data, and continues monitoring after policy enforcement begins — surfacing the deviations from expected behavior that zero trust architecture exists to detect.
Visibility Is Not the Destination
Visibility is the prerequisite, not the goal. The goal is an operating environment where implicit trust is eliminated — where every communication relationship is explicit, every access decision is verified, and every deviation from expected behavior is detected.
But you can't get there by guessing at policy. You get there by knowing your environment first — and that knowledge comes from the network itself.