DC/OS can map traffic from a single Virtual IP (VIP) to multiple IP addresses and ports. DC/OS VIPs are name-based, which means clients connect with a service address instead of an IP address.
DC/OS automatically generates name-based VIPs that do not collide with IP-based VIPs, so you don't have to worry about collisions. Name-based VIPs are created automatically when a service is installed.
A named VIP contains these components:

- the name of the service
- the name of the scheduler that launched the service (marathon)
- the port the service listens on
You can assign a VIP to your application from the DC/OS GUI. The values you enter when you deploy a new service are translated into these Marathon application definition entries:
- portDefinitions if not using Docker containers
- portMappings if using Docker containers

VIPs follow this naming convention:
<service-name>.marathon.l4lb.thisdcos.directory:<port>
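For reference, this is a minimal sketch of what the resulting Marathon application definition can look like when a VIP is requested via a `VIP_0` label on a port definition. The app id, command, and resource values here are hypothetical:

```json
{
  "id": "/my-service",
  "cmd": "./run-my-service",
  "cpus": 0.5,
  "mem": 128,
  "portDefinitions": [
    {
      "port": 0,
      "protocol": "tcp",
      "labels": { "VIP_0": "/my-service:5555" }
    }
  ]
}
```

With a definition like this, clients would reach the service at my-service.marathon.l4lb.thisdcos.directory:5555.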
From the Networking tab, select NETWORK TYPE > Virtual Network: dcos.
Expand ADD SERVICE ENDPOINT and fill in the requested fields.
As you fill in these fields, the service addresses that Marathon sets up will appear at the bottom of the screen. You can assign multiple VIPs to your app by clicking ADD SERVICE ENDPOINT.
In the example above, clients can access the service at my-service.marathon.l4lb.thisdcos.directory:5555.
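The naming convention can be expressed as a small helper. This is an illustrative sketch only; `vip_address` is a hypothetical function, not part of any DC/OS API:

```python
def vip_address(service_name: str, port: int, scheduler: str = "marathon") -> str:
    """Build a DC/OS named-VIP service address following the convention
    <service-name>.<scheduler>.l4lb.thisdcos.directory:<port>."""
    return f"{service_name}.{scheduler}.l4lb.thisdcos.directory:{port}"

print(vip_address("my-service", 5555))
# my-service.marathon.l4lb.thisdcos.directory:5555
```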
Click REVIEW & RUN, then RUN SERVICE.
You can click on the Networking tab to view networking details for your service.
For more information on port configuration, see the Marathon ports documentation.
Some DC/OS services, for example Kafka, automatically create VIPs when you install them. The naming convention is broker.<service-name>.l4lb.thisdcos.directory:9092.
Follow these steps to view the VIP for Kafka.
Prerequisite: The Kafka service and CLI must be installed.
Run this command:
dcos kafka endpoints broker
The output should resemble:
{
  "address": [
    "10.0.2.199:9918"
  ],
  "zookeeper": "master.mesos:2181/dcos-service-kafka",
  "dns": [
    "broker-0.kafka.mesos:9918"
  ],
  "vip": "broker.kafka.l4lb.thisdcos.directory:9092"
}
You can use this VIP to address any one of the Kafka brokers in the cluster.
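If you need the VIP programmatically, the JSON printed by `dcos kafka endpoints broker` can be parsed. A sketch, using the sample output shown above in place of a live CLI call:

```python
import json

# Sample output from `dcos kafka endpoints broker` (the example above).
sample = """
{
  "address": ["10.0.2.199:9918"],
  "zookeeper": "master.mesos:2181/dcos-service-kafka",
  "dns": ["broker-0.kafka.mesos:9918"],
  "vip": "broker.kafka.l4lb.thisdcos.directory:9092"
}
"""

endpoints = json.loads(sample)
# Split the VIP into the name-based service address and the port.
vip_host, _, vip_port = endpoints["vip"].partition(":")
print(vip_host)  # broker.kafka.l4lb.thisdcos.directory
print(vip_port)  # 9092
```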
This behavior (idle connections being dropped) is often experienced with applications that hold long-lived connections, such as databases (e.g., PostgreSQL). To fix this, enable keepalives. The keepalive can be an application-specific mechanism, such as a heartbeat, or something in the protocol, such as a TCP keepalive. A keepalive is required because a load balancer cannot differentiate between an idle and a dead connection, since no packets are sent in either case. The default idle timeout depends on the kernel configuration, but is usually 5 minutes.
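As a sketch, TCP keepalives can be enabled on a client socket from Python. The tuning values below are illustrative assumptions, and the `TCP_KEEP*` constants are Linux-specific, which is why they are guarded:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

# Enable TCP keepalives so an idle but healthy connection still generates
# packets, letting the load balancer tell it apart from a dead connection.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-only tuning (guarded because these constants are platform-specific):
# first probe after 60s idle, then every 10s, giving up after 5 failed probes.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)

print(sock.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE) != 0)  # True
```

Application-level heartbeats achieve the same goal one layer up; either way, the point is to make sure an idle connection periodically sends traffic.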