L2TPNS Manual

  1. Overview
  2. Installation
    1. Requirements
    2. Compile
    3. Install
    4. Running
  3. Configuration
    1. startup-config
    2. users
    3. ip_pool
    4. build-garden
  4. Controlling the Process
    1. Command-Line Interface
    2. nsctl
    3. Signals
  5. Throttling
  6. Interception
  7. Authentication
  8. Plugins
  9. Walled Garden
  10. Clustering
  11. Routing
  12. Performance

Overview

l2tpns is half of a complete L2TP implementation. It supports only the LNS side of the connection.

L2TP (Layer 2 Tunneling Protocol) is designed to allow any layer 2 protocol (e.g. Ethernet, PPP) to be tunneled over an IP connection. l2tpns implements PPP over L2TP only.

There are a couple of other L2TP implementations, of which l2tpd is probably the most popular. l2tpd can also handle either end of a tunnel, and is a lot more configurable than l2tpns. However, due to the way it works, it is nowhere near as scalable.

l2tpns uses the TUN/TAP interface provided by the Linux kernel to receive and send packets. Through some packet manipulation it does not require a separate interface per connection, as l2tpd does.

This allows it to scale extremely well to very high loads and very high numbers of connections.

It also has a plugin architecture which allows custom code to be run during processing. An example of this is the included walled garden module.


Documentation is not my best skill. If you find any problems with this document, or if you wish to contribute, please email the mailing list.

Installation

Requirements

  1. Linux kernel version 2.4 or above, with the TUN/TAP interface either compiled in, or as a module.
  2. libcli 1.8.0 or greater.
    You can get this from http://sourceforge.net/projects/libcli

Compile

You can generally get away with just running make from the source directory. This will compile the daemon, associated tools and any modules shipped with the distribution.

Install

After you have successfully compiled everything, run make install to install it. By default, the binaries are installed into /usr/sbin, the configuration into /etc/l2tpns, and the modules into /usr/lib/l2tpns.

You will definitely need to edit the configuration files before you start. See the Configuration section for more information.

Running

You only need to run /usr/sbin/l2tpns as root to start it. It does not detach to become a daemon process, so you should perhaps run it from init.

By default there is no log destination set, so all log messages will go to stdout.

Configuration

All configuration of the software is done from the files installed into /etc/l2tpns.

startup-config

This is the main configuration file for l2tpns. The format of the file is a list of commands that can be run through the command-line interface. This file can also be written directly by the l2tpns process if a user runs the write memory command, so any comments will be lost. However, if your policy is never to have the program write the config, feel free to comment the file with a # or ! at the beginning of the line.

A list of the possible configuration directives follows. Each of these should be set by a line like:

set configstring "value"
set ipaddress 192.168.1.1
set boolean true
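
For instance, a fragment using directives referenced elsewhere in this manual might look like the following (all addresses and values here are illustrative placeholders only):

    # addresses and values below are examples only
    set bind_address 10.0.0.1
    set throttle_speed 64
    set snoop_host 192.168.0.10
    set snoop_port 10000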

users

Usernames and passwords for the command-line interface are stored in this file. The format is username:password, where password may be either plain text, an MD5 digest (prefixed by $1$salt$) or a DES password, distinguished from plain text by the prefix {crypt}.

The username enable has a special meaning and is used to set the enable password.

Note: If this file doesn't exist, then anyone who can connect to port 23 will be allowed access without a username / password.
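
For example, a users file might contain lines like these (the hashes below are placeholders, not working values):

    joe:secretpassword
    fred:{crypt}ab01cDEfGh2Ij
    jane:$1$Xf3a$0123456789abcdefghijkl
    enable:enablepassword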

ip_pool

This file is used to configure the IP address pool from which user addresses are assigned. It should contain either an IP address or a CIDR network per line, e.g.:

    192.168.1.1
    192.168.1.2
    192.168.1.3
    192.168.4.0/24
    172.16.0.0/16
    10.0.0.0/8

Keep in mind that l2tpns can only handle 65535 connections per process, so don't put more than 65535 IP addresses in the configuration file; any beyond that will be wasted.

build-garden

On startup the garden plugin creates a chain called "garden" in the nat table, then sources the build-garden script to populate that chain. All packets from gardened users will be passed through this chain. Example:
    iptables -t nat -A garden -p tcp -m tcp --dport 25 -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p udp -m udp --dport 53 -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p tcp -m tcp --dport 53 -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p tcp -m tcp --dport 80 -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p tcp -m tcp --dport 110 -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p tcp -m tcp --dport 443 -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p icmp -m icmp --icmp-type echo-request -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p icmp -j ACCEPT
    iptables -t nat -A garden -j DROP

Controlling the Process

A running l2tpns process can be controlled in a number of ways. The primary method of control is by the Command-Line Interface (CLI).

You can also remotely send commands to modules via the provided nsctl client. This currently only works with the walled garden module, but it is trivial to modify to support other modules.

Also, there are a number of signals that l2tpns understands and takes action on when it receives them.

Command-Line Interface

You can access the command-line interface by telnetting to port 23. There is no IP address restriction, so it's a good idea to firewall this port off from anyone who doesn't need access to it. See users for information on restricting access with a username and password.

The CLI gives you real-time control over almost everything in the process. The interface is designed to look like a Cisco device, and supports things like command history, line editing and context-sensitive help. This is provided by linking with the libcli library; general documentation of the interface is available from the libcli project (see the URL under Requirements).

After you have connected to the telnet port (and perhaps logged in), you will be presented with a hostname> prompt.

Enter help to get a list of possible commands.

nsctl

nsctl was implemented (badly) to allow messages to be passed to modules.

You must pass at least 2 parameters: host and command. The host is the address of the l2tpns server which you want to send the message to.

The command can currently be either garden or ungarden. With both of these commands, you must give a session ID as the 3rd parameter; this will temporarily activate or deactivate the walled garden for that session.
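
For example, to garden and then ungarden session 1234 on a hypothetical server at 192.168.0.1, following the parameter order described above:

    nsctl 192.168.0.1 garden 1234
    nsctl 192.168.0.1 ungarden 1234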

Signals

While the process is running, you can send it a few different signals using the kill command, for example:

    killall -HUP l2tpns

The signals understood are:

  • SIGHUP - Reload the config from disk and re-open the log file.
  • SIGTERM / SIGINT - Stop the process. Tunnels and sessions are not terminated.
  • SIGQUIT - Shut down tunnels and sessions, exiting the process when complete.

Throttling

l2tpns contains support for slowing down user sessions to whatever speed you desire. You must first enable the global setting throttle_speed before this will be activated.

If you wish a session to be throttled permanently, you should set the Vendor-Specific radius value Cisco-Avpair="throttle=yes", which will be handled by the autothrottle module.

Otherwise, you can enable and disable throttling of an active session using the throttle CLI command.
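
For example (a sketch, assuming the command takes a username argument in the same form as the snoop commands described under Interception):

    throttle fred
    no throttle fred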

Interception

You may have to deal with legal requirements to be able to intercept a user's traffic at any time. l2tpns allows you to begin and end interception on the fly, as well as at authentication time.

When a user is being intercepted, a copy of every packet they send and receive will be sent wrapped in a UDP packet to the IP address and port set in the snoop_host and snoop_port configuration variables.

The UDP packet contains just the raw IP frame, with no extra headers.
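
For example, to send intercepted traffic to a hypothetical collector at 192.168.0.10, port 10000, you might put this in startup-config:

    set snoop_host 192.168.0.10
    set snoop_port 10000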

To enable interception on a connected user, use the snoop username and no snoop username CLI commands. These will enable interception immediately.

If you wish the user to be intercepted whenever they reconnect, you will need to modify the radius response to include the Vendor-Specific value Cisco-Avpair="intercept=yes". For this feature to be enabled, you need to have the autosnoop module loaded.

Authentication

Whenever a session connects, it is not fully set up until authentication is completed. The remote end must send a PPP CHAP or PPP PAP authentication request to l2tpns.

This request is sent to the radius server, which will hopefully respond with Auth-Accept or Auth-Reject.

If Auth-Accept is received, the session is set up and an IP address is assigned. The radius server can include a Framed-IP-Address field in the reply, and that address will be assigned to the client. It can also include specific DNS servers, and a Framed-Route if that is required.

If Auth-Reject is received, then the client is sent a PPP AUTHNAK packet, at which point they should disconnect. The exception to this is when the walled garden module is loaded, in which case the user still receives a PPP AUTHACK, but their session is flagged as a gardened user, and they should not receive normal service.

The radius reply can also contain a Vendor-Specific attribute called Cisco-Avpair. This field is a freeform text field that most Cisco devices understand to contain configuration instructions for the session. In the case of l2tpns it is expected to be of the form

    key=value,key2=value2,key3=value3,keyN=valueN

Each key=value pair is split out and passed to any loaded modules. The autosnoop and autothrottle modules understand the keys intercept and throttle respectively. For example, for a user who is to be both throttled and intercepted, the Cisco-Avpair value should contain:

    intercept=yes,throttle=yes
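
For example, a FreeRADIUS-style users entry combining these attributes might look like the following sketch (the username, password and address are placeholders):

    fred    Cleartext-Password := "fredpass"
            Service-Type = Framed-User,
            Framed-IP-Address = 192.168.4.4,
            Cisco-Avpair = "intercept=yes,throttle=yes"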

Plugins

So as to make l2tpns as flexible as possible (I know the core code is pretty difficult to understand), it includes a plugin API, which you can use to hook into certain events.

There are a few example modules included - autosnoop, autothrottle and garden.

When an event happens that has a hook, l2tpns looks for a predefined function name in every loaded module, and runs them in the order the modules were loaded.

The function should return PLUGIN_RET_OK if it is all OK. If it returns PLUGIN_RET_STOP, then it is assumed to have worked, but that no further modules should be run for this event.

A return of PLUGIN_RET_ERROR means that this module failed, and no further processing should be done for this event. Use this with care.

Every event function is passed a pointer to a structure named param_<event>, which varies in content with each event. The function name for each event is plugin_<event>, so for the timer event, the function declaration should look like:

    int plugin_timer(struct param_timer *data);
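
As a minimal sketch of a complete module (the plugin.h header name is an assumption; the structure field, function name and return code follow the documentation in this section):

    /* example_timer.c - a sketch of an l2tpns plugin, not a tested module */
    #include "plugin.h"    /* assumed to declare param structures and return codes */

    /* timer event: runs every second from a signal handler,
     * so keep the work short and reentrant */
    int plugin_timer(struct param_timer *data)
    {
        if (data->time_now % 60 == 0)
        {
            /* once a minute: do some housekeeping here */
        }
        return PLUGIN_RET_OK;    /* let later modules see this event too */
    }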
A list of the available events follows, with a list of all the fields in the supplied structure:
pre_auth: This is called after a radius response has been received, but before it has been processed by the code. This will allow you to modify the response in some way.
  • t - Tunnel ID
  • s - Session ID
  • username
  • password
  • protocol (0xC023 for PAP, 0xC223 for CHAP)
  • continue_auth - Set to 0 to stop processing authentication modules
post_auth: This is called after a radius response has been received, and the basic checks have been performed. This is what the garden module uses to force authentication to be accepted.
  • t - Tunnel ID
  • s - Session ID
  • username
  • auth_allowed - This is already set to true or false depending on whether authentication has been allowed so far. You can set this to 1 or 0 to force authentication to be allowed or disallowed.
  • protocol (0xC023 for PAP, 0xC223 for CHAP)
packet_rx: This is called whenever a session receives a packet. Use this sparingly, as this will seriously slow down the system.
  • t - Tunnel ID
  • s - Session ID
  • buf - The raw packet data
  • len - The length of buf
packet_tx: This is called whenever a session sends a packet. Use this sparingly, as this will seriously slow down the system.
  • t - Tunnel ID
  • s - Session ID
  • buf - The raw packet data
  • len - The length of buf
timer: This is run every second, no matter what is happening. This is called from a signal handler, so make sure anything you do is reentrant.
  • time_now - The current unix timestamp
new_session: This is called after a session is fully set up. The session is now ready to handle traffic.
  • t - Tunnel ID
  • s - Session ID
kill_session: This is called when a session is about to be shut down. This may be called multiple times for the same session.
  • t - Tunnel ID
  • s - Session ID
radius_response: This is called whenever a radius response includes a Cisco-Avpair value. The value is split up into key=value pairs, and each is processed through all modules.
  • t - Tunnel ID
  • s - Session ID
  • key
  • value
control: This is called whenever an nsctl packet is received. The handler should process the packet and form a response if required.
  • buf - The raw packet data
  • l - The raw packet data length
  • source_ip - Where the request came from
  • source_port - Where the request came from
  • response - Allocate a buffer and put your response in here
  • response_length - Length of response
  • send_response - Whether a response should be sent (true or false). If you set this to true, you must allocate a response buffer.
  • type - Type of request (see nsctl.c)
  • id - ID of request
  • data - I'm really not sure
  • data_length - Length of data
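
As a further sketch, a handler for the radius_response event using the fields documented above (the header name is an assumption, as before, and a real module would act on the session here):

    /* sketch of a radius_response hook for Cisco-Avpair key=value pairs */
    #include <string.h>
    #include "plugin.h"    /* assumed header, as above */

    int plugin_radius_response(struct param_radius_response *data)
    {
        /* called once per key=value pair found in a Cisco-Avpair */
        if (strcmp(data->key, "throttle") == 0 && strcmp(data->value, "yes") == 0)
        {
            /* a module like autothrottle would flag session data->s here */
        }
        return PLUGIN_RET_OK;
    }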

Walled Garden

Walled Garden is implemented so that you can provide a (perhaps limited) service to sessions that authenticate incorrectly.

Whenever a session provides incorrect authentication and the radius server responds with Auth-Reject, the walled garden module (if loaded) will force authentication to succeed, set the garden flag in the session structure, and add an iptables rule to the garden_users chain to force all packets for the session's IP address to traverse the garden chain.

This doesn't work by itself. To set it all up, you will need to populate the garden chain in the nat table via the build-garden script, with rules to limit users' traffic. For example, to force all traffic except DNS to be forwarded to 192.168.1.1, add these entries to your build-garden:

    iptables -t nat -A garden -p tcp --dport ! 53 -j DNAT --to 192.168.1.1
    iptables -t nat -A garden -p udp --dport ! 53 -j DNAT --to 192.168.1.1
l2tpns will add entries to the garden_users chain as appropriate.

You can check the amount of traffic being captured using the following command:

    iptables -t nat -L garden -nvx

Clustering

An l2tpns cluster consists of one* or more servers configured with the same configuration, notably the multicast cluster_address.

*A stand-alone server is simply a degraded cluster.

Initially servers come up as cluster slaves, and periodically (every cluster_hb_interval/10 seconds) send out ping packets containing the start time of the process to the multicast cluster_address.

A cluster master sends heartbeat packets rather than pings; the heartbeats contain the session and tunnel changes since the last heartbeat.

When a slave has not seen a heartbeat within cluster_hb_timeout/10 seconds, it "elects" a new master by examining the list of peers it has seen pings from and determining which of these (or itself) is the "best" candidate to be master. "Best" in this context means the server with the highest uptime (the highest IP address is used as a tie-breaker in the case of equal uptimes).

After discovering a master and determining that it is up to date (i.e. that it has seen an update for all in-use sessions and tunnels in heartbeat packets), a slave will raise a route (see Routing) for the bind_address and for all addresses/networks in ip_pool. Any packets received by the slave which would alter the session state, as well as packets for throttled or gardened sessions, are forwarded to the master for handling. In addition, byte counters for session traffic are periodically forwarded.

A master, when it determines that it has at least one up-to-date slave, will drop all routes (raising them again if all slaves disappear) and subsequently handle only packets forwarded to it by the slaves.
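
For example, the relevant startup-config directives for a cluster might look like this (the multicast address and timer values below are illustrative placeholders):

    set cluster_address 239.192.13.13
    ! send a ping or heartbeat every 0.5 seconds
    set cluster_hb_interval 5
    ! elect a new master after 15 seconds without a heartbeat
    set cluster_hb_timeout 150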

Routing

If you are running a single instance, you may simply statically route the IP pools to the bind_address (l2tpns will send a gratuitous ARP).

For a cluster, configure the members as BGP neighbours on your router and configure multi-path load-balancing. Cisco uses "maximum-paths ibgp" for IBGP. If this is not supported by your IOS revision, you can use "maximum-paths" (which works for EBGP) and set as_number to a private value such as 64512.
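
As a sketch, an IOS configuration for two cluster members (the neighbour addresses and the AS number 64512 are placeholders) might look like:

    router bgp 64512
     neighbor 10.0.0.2 remote-as 64512
     neighbor 10.0.0.3 remote-as 64512
     maximum-paths ibgp 2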

Performance

Performance is great.

I'd like to include some pretty graphs here that show a linear performance increase, with no impact by number of connected sessions.

That's really what it looks like.


David Parrish
l2tpns-users@lists.sourceforge.net