Sunday, November 12, 2006

Howto: Configure Linux Virtual Local Area Network (VLAN)

Posted by LinuxTitli in Linux, Networking

VLAN is an acronym for Virtual Local Area Network. Several VLANs can co-exist on a single physical switch. They are configured via software (Linux commands and configuration files) rather than through a hardware interface, although you still need to configure the switch itself.

A hub or switch connects all nodes in a LAN, and the nodes can communicate without a router. For example, all nodes in LAN A can communicate with each other without the need for a router. If a node in LAN A wants to communicate with a node in LAN B, a router is required. Therefore, each LAN (A, B, C, and so on) is separated from the others by a router.

As the name suggests, VLANs let you combine multiple logical LANs on the same physical network. But what are the advantages of VLANs?

  • Performance
  • Ease of management
  • Security
  • Trunks
  • No hardware reconfiguration is needed when you physically move a server to another location

A discussion of VLAN concepts and fundamentals is beyond the scope of this article. I am reading the following textbooks, which I found extremely useful and highly recommend:

  • Cisco CCNA ICND books (Part I and II)
  • Andrew S. Tanenbaum, Computer Networks

Configuration problems

I was lucky enough to get a couple of hints from our internal wiki docs :D

  • Not all network drivers support VLANs. You may need to patch your driver. You can verify 802.1Q support with the quick check below.
  • MTU may be another problem. 802.1Q VLAN tagging works by adding an extension to each Ethernet frame header, enlarging it from 14 to 18 bytes. The VLAN tag contains the VLAN ID and priority. See the Linux VLAN site for patches and other information.
  • Do not use VLAN ID 1, as it may be reserved for administrative purposes.
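
Before moving on, it is worth confirming that your kernel has 802.1Q support available. On most modern kernels this lives in the 8021q module; here is a quick sanity check (module handling may differ on older or patched kernels):

# modprobe 8021q
# lsmod | grep 8021q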

OK, now I need to configure a VLAN for RHEL. (Note: due to some other trouble tickets I was not able to configure the VLAN today, but tomorrow afternoon after my lunch break I'll get my hands dirty with Linux VLANs ;) )

VLAN Configuration

My VLAN ID is 5, so I need to copy the file /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth0.5:

# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0.5

So I have one network card (eth0) and it needs to use tagged network traffic for VLAN ID 5.

These files configure the Linux system to have:

  • eth0 - Your regular network interface (untagged traffic)
  • eth0.5 - Your virtual interface that uses tagged frames for VLAN ID 5

Do not modify /etc/sysconfig/network-scripts/ifcfg-eth0 file. Now open file /etc/sysconfig/network-scripts/ifcfg-eth0.5 using vi text editor:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0.5

Find the DEVICE=eth0 line and replace it with:

DEVICE=eth0.5

Append line:

VLAN=yes

Also make sure you assign a correct IP address, either via DHCP or statically. Save the file. Remove the gateway entry from all other network configuration files; add the gateway only to the /etc/sysconfig/network file.
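
For reference, a complete /etc/sysconfig/network-scripts/ifcfg-eth0.5 with a static address might look like the following sketch (the IP values are illustrative; adjust them for your network):

DEVICE=eth0.5
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.5.10
NETMASK=255.255.255.0
VLAN=yes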

Restart network:

# /etc/init.d/network restart

Please note that if you need to configure VLAN ID 2, copy the file /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth0.2 and repeat the procedure above.

Using vconfig command

The above method works with Red Hat Enterprise Linux without problems. However, you will notice that there is also a command called vconfig. The vconfig program allows you to create and remove VLAN devices on a VLAN-enabled kernel. VLAN devices are virtual Ethernet devices that represent the virtual LANs on the physical LAN.

Please note that this is simply another way of configuring a VLAN. If you are happy with the method above, there is no need to follow this one.

Add VLAN ID 5 with the following command for eth0:

# vconfig add eth0 5

The add command creates a VLAN device on eth0, which results in an eth0.5 interface. You can use the normal ifconfig command to see the device information:

# ifconfig eth0.5

Use ifconfig to assign an IP address:

# ifconfig eth0.5 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255 up

Get detailed information about the VLAN interface:

# cat /proc/net/vlan/eth0.5

If you wish to delete the VLAN interface, bring it down and then remove it:

# ifconfig eth0.5 down
# vconfig rem eth0.5
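
Keep in mind that interfaces created with vconfig do not survive a reboot. If you stay with this method, the commands must be re-run at boot time, for example from a startup script (a minimal sketch, assuming your distribution executes /etc/rc.local at boot and reusing the illustrative IP from above):

# echo '/sbin/vconfig add eth0 5' >> /etc/rc.local
# echo '/sbin/ifconfig eth0.5 192.168.1.100 netmask 255.255.255.0 up' >> /etc/rc.local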


NFSv4 delivers seamless network access


developerWorks

Level: Introductory

Frank Pohlmann (frank@linuxuser.co.uk), Linux user and developer, Freelance
Kenneth Hess (kenneth.hess@gmail.com), Linux user, advocate, and author, Freelance

12 Sep 2006

Network File System (NFS) has been part of the world of free operating systems and proprietary UNIX® flavors since the mid-1980s. But not all administrators know how it works or why there have been new releases. A knowledge of NFS is important simply because the system is vital for seamless access across UNIX networks. Learn how the latest release of NFS, NFSv4, has addressed many criticisms, particularly with regard to security problems, that became apparent in versions 2 and 3.

We take file systems for granted. We work on computers that give us access to printers, cameras, databases, remote sensors, telescopes, compilers, and mobile phones. These devices share few characteristics -- indeed, many of them became a reality only after the Internet became universal (for example, cameras and mobile phones that combine the functions of small computers). However, they all need file systems of some type to store and order data securely.

Typically, we don't really ask how the data, the applications consuming it, and the interfaces presenting the data to us are stored on the computers themselves. Most users would (not unjustifiably) regard a file system as the wall separating them from the bare metal storing bits and bytes. And the protocol stacks connecting file systems usually remain black boxes to most users and, indeed, programmers. Ultimately, however, internetworking all these devices amounts to enabling communication between file systems.

Networking file systems and other holy pursuits

In many ways, communication is little more than a long-distance copying of information. Network protocols were not the only means by which universal communications became possible. After all, every computer system must translate datagrams into something the operating system at the other end understands. TCP is a highly effective transmission protocol, but it's not optimized to facilitate fast access to files or to enable remote control of application software.

Distributed vs. networked computations

Traditional networking protocols don't have much to contribute to the way in which computations are distributed across computers and, indeed, networks. Only foolish programmers would rely on transmission protocols and fiber-optic cables to enable parallel computations. Instead, we typically rely on a serial model, in which link-level protocols take over after connections are initiated and have performed a rather complex greeting between network cards. Parallel computations and distributed file systems are no longer aware of IP or Ethernet. Today, we can safely disregard them as far as performance is concerned. However, security problems are a different matter.

One piece of the puzzle is the way file access is organized across a computer system. Now, it's irrelevant to the accessing system whether the accessed files are available on one or on several presumably rationally distributed computers. File system semantics and file system data structures are two very different topics these days. File system semantics on a Plan 9 installation or on an Andrew File System (AFS)-style distributed file system hide the way in which files are organized or how the file system maps to hardware and networks. NFS does not necessarily hide the way in which files and directories are stored on remote file systems, but it doesn't expose the actual hardware storing the file systems, directories, and files, either.





NFS: A solution to a UNIX problem

Distributed file system access, therefore, needs rather more than a couple of commands enabling users to mount a directory on a computer networked to theirs. Sun Microsystems faced up to this challenge a number of years ago when it started propagating something called Remote Procedure Calls (RPCs) and NFS.

The basic problem that Sun was trying to solve was how to connect several UNIX computers to form a seamless distributed working environment without having to rewrite UNIX file system semantics and without having to add too many data structures specific to distributed file systems. Naturally, it was impossible for a network of UNIX workstations to appear as one large system: the integrity of each system had to be preserved while still enabling users to work on a directory on a different computer without experiencing unacceptable delays or limitations in their workflow.

To be sure, NFS does more than facilitate access to text files. You can distribute "runnable" applications through NFS, as well. Security procedures serve to shore up the network against the malicious takeovers of executables. But how exactly does this happen?

NFS is RPC

NFS is traditionally defined as an RPC application requiring TCP for the NFS server and either TCP or another network congestion-avoiding protocol for the NFS client. The Internet Engineering Task Force (IETF) has published the Request for Comments (RFC) for RPC in RFC 1831. The other standard vital to the functioning of an NFS implementation describes the data formats that NFS uses; it has been published in RFC 1832 as the "External Data Representation" (XDR) document.

Other RFCs are relevant to security and the encryption algorithms used to exchange authentication information during NFS sessions, but we focus on the basic mechanisms first. One protocol that concerns us is the Mount protocol, which is described in Appendix 1 of RFC 1813.

This RFC tells you which protocols make NFS work, but it doesn't tell you how NFS works today. You've already learned something important: NFS protocols have been documented as IETF standards. While NFS was stuck at version 3, RPC had not progressed beyond the informational RFC stage and was thus perceived as an interest largely confined to Sun Microsystems' admittedly huge engineering task force and its proprietary UNIX variety. Sun NFS has been around in several versions since 1985 and therefore predates most current file system flavors by several years. Sun Microsystems turned over control of NFS to the IETF in 1998, and most NFS version 4 (NFSv4) activity occurred under the latter's aegis.

So, if you're dealing with RPC and NFS today, you're dealing with a version that reflects the concerns of companies and interest groups outside Sun's influence. Many Sun engineers, however, retain a deep interest in NFS development.





NFS version 3

NFS in its version 3 avatar (NFSv3) was not stateful: NFSv4 is. This fundamental statement is unlikely to raise any hackles today, although the TCP/IP world on which NFS builds has mostly been stateless -- a fact that has helped traffic analysis and security software companies do quite well for themselves.

NFSv3 had to rely on several subsidiary protocols to seamlessly mount directories on remote computers without becoming too dependent on underlying file system mechanisms. NFS has not always been successful in this attempt. For example, the Mount protocol handled the initial file handle, while the Network Lock Manager protocol addressed file locking. Both operations required state, which NFSv3 did not provide. You therefore have complex interactions between protocol layers that do not reflect similar data-flow mechanisms. Add the fact that file and directory creation in Microsoft® Windows® works very differently than in UNIX, and matters become rather complicated.

NFSv3 had to use several ports to accommodate some of its subsidiary protocols, and you get a rather complex picture of ports and protocol layers and all their attendant security concerns. Today, this model of operation has been abandoned, and all operations that subsidiary protocol implementations previously executed from individual ports are now handled by NFSv4 from a single, well-known port.

NFSv3 was also ready for Unicode-enabled file system operation -- an advantage that until the late 1990s had to remain fairly theoretical. In all, it mapped well to UNIX file system semantics and motivated competing distributed file system implementations like AFS and Samba. Not surprisingly, Windows support was poor, but Samba file servers have since addressed file sharing between UNIX and Windows systems.





NFS version 4

NFSv4 is, as we pointed out, stateful. Several radical changes made this behavior possible. The subsidiary protocols we already mentioned, which had to be called as user-level processes, have been abandoned. Instead, every file-opening operation and quite a few RPC calls are turned into kernel-level file system operations.

All NFS versions define each unit of work in terms of RPC client and server operations. Each NFSv3 request required a fairly generous number of RPC calls and port-opening calls to yield a result. Version 4 simplifies matters by introducing a so-called compound operation that subsumes a large number of file system object operations. The immediate effect, of course, is that far fewer RPC calls and far less data have to traverse the network, even though each RPC call carries substantially more data while accomplishing far more. It is estimated that NFSv3 RPC calls required five times the number of client-server interactions that NFSv4 compound RPC procedures demand.

RPC is not really that important anymore and essentially serves as a wrapper around the operations encapsulated within the NFSv4 stack. This change also makes the protocol stack far less dependent on the underlying file system semantics. But the changes don't mean that the file system operations of other operating systems were neglected: For example, Windows shares require stateful open calls. Statefulness not only helps traffic analysis but, when included in file system semantics, makes file system operations much more traceable. Stateful open calls enable clients to cache file data and state -- something that would otherwise have to happen on the server. In the real world, where Windows clients are ubiquitous, NFS servers that work seamlessly and transparently with Windows shares are worth the time you'll spend customizing your NFS configuration.





Using NFS

NFS setup is generically similar to Samba. On the server side, you define file systems or directories to export, or share; the client side mounts those shared directories. When a remote client mounts an NFS-shared directory, that directory is accessed in the same way as any other local file system. Setting up NFS from the server side is an equally simple process. Minimally, you must create or edit the /etc/exports file and start the NFS daemon. To set up a more secure NFS service, you must also edit /etc/hosts.allow and /etc/hosts.deny. The client side of NFS requires only the mount command. For more information and options, consult the Linux® man pages.

The NFS server

Entries in the /etc/exports file have a straightforward format. To share a file system, edit the /etc/exports file and supply a file system (with options) in the general format:

directory (or file system)   client1(option1,option2) client2(option1,option2)

Note that there is no space between a client specification and its parenthesized options; adding one changes the meaning of the entry.

General options

Several general options are available to help you customize your NFS implementation. They include:

  • secure: This option -- the default -- uses available TCP/IP ports below 1024 for NFS connections. Specifying insecure disables this option.
  • rw: This option allows NFS clients read/write access. The default option is read only.
  • async: This option may improve performance, but it can also cause data loss if you restart the NFS server without first performing a clean shutdown of the NFS daemon. The default setting is sync.
  • no_wdelay: This option turns off the write delay. If you set async, NFS ignores this option.
  • nohide: If you mount one directory over another, the old directory is typically hidden or appears empty. To disable this behavior, specify the nohide option.
  • no_subtree_check: This option turns off subtree checking, which performs some security checks that you may not want to bypass. The default option is to have subtree checks enabled.
  • no_auth_nlm: This option, also specified as insecure_locks, tells the NFS daemon not to authenticate locking requests. If you're concerned about security, avoid this option. The default option is auth_nlm or secure_locks.
  • mp (mountpoint=path): By explicitly declaring this option, NFS requires that the exported directory be mounted.
  • fsid=num: This option is typically used in NFS failover scenarios. Refer to the NFS documentation if you want to implement NFS failover.
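
To make these options concrete, here is a hypothetical /etc/exports entry combining several of them (the directory and network are illustrative):

/srv/projects 192.168.0.0/24(rw,sync,no_wdelay,no_subtree_check)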

User mapping

Through user mapping in NFS, you can grant pseudo or actual user and group identity to a user working on an NFS volume. The NFS user has the user and group permissions that the mapping allows. Using a generic user and group for NFS volumes provides a layer of security and flexibility without a lot of administrative overhead.

User access is typically "squashed" when using files on an NFS-mounted file system, which means that a user accesses files as an anonymous user who, by default, has read-only permissions to those files. This behavior is especially important for the root user. Cases exist, however, in which you want a user to access files on a remote system as root or some other defined user. NFS allows you to specify a user -- by user identification (UID) number and group identification (GID) number -- to access remote files, and you can disable the normal behavior of squashing.

User mapping options include:

  • root_squash: This option maps requests from the root user to the anonymous UID/GID, effectively denying root its usual privileges on the mounted NFS volume.
  • no_root_squash: This option allows root user access on the mounted NFS volume.
  • all_squash: This option, which is useful for a publicly accessible NFS volume, squashes all UIDs and GIDs to the anonymous account. The default setting is no_all_squash.
  • anonuid and anongid: These options change the anonymous UID and GID to a specific user and group account.

Listing 1 shows examples of /etc/exports entries.


Listing 1. Example /etc/exports entries
 
/opt/files 192.168.0.*
/opt/files 192.168.0.120
/opt/files 192.168.0.125(rw,all_squash,anonuid=210,anongid=100)
/opt/files *(ro,insecure,all_squash)

The first entry exports the /opt/files directory to all hosts in the 192.168.0 network. The next entry exports /opt/files to a single host: 192.168.0.120. The third entry specifies the host 192.168.0.125 and grants read/write access, with all users squashed to UID 210 and GID 100. The final entry is for a "public" directory that allows read-only access, and only under the anonymous account.
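
After editing /etc/exports, you can usually activate the changes without restarting the NFS daemon by re-exporting everything with the exportfs utility (shipped with essentially all Linux NFS server implementations):

# exportfs -ra
# exportfs -v

The first command re-exports all entries in /etc/exports; the second lists the currently exported file systems along with their options, which is a convenient way to verify your edits.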

The NFS client

A word of caution

After you have used NFS to mount a remote file system, that system will also be part of any total system backup that you perform on the client system. This behavior can have potentially disastrous results if you don't exclude the newly mounted directories from the backup.

To use NFS as a client, the client computer must be running rpc.statd and portmap. You can run a quick ps -ef to check for these two daemons (see the check below). If they are running -- and they should be -- you can mount the server's exported directory with the generic command:

mount server:directory /local/mount/point
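
As a quick sanity check that the required daemons are actually running, something like the following should work (the grep pattern is just one of several ways to look for them):

# ps -ef | grep -E 'rpc.statd|portmap'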

Generally speaking, you must be running under root to mount a file system. From a remote computer, you can use the following command (assume that the NFS server has an IP address of 192.168.0.100):

mount 192.168.0.100:/opt/files  /mnt

Your distribution may require you to specify the file system type when mounting a file system. If so, run the command:

mount -t nfs 192.168.0.100:/opt/files /mnt

The remote directory should mount without issue if you've set up the server side correctly. Now, cd to the /mnt directory and run the ls command to see the files. To make this mount permanent, you must edit the /etc/fstab file and create an entry similar to the following:

192.168.0.100:/opt/files  /mnt  nfs  rw  0  0

Note: Refer to the fstab man page for more information on /etc/fstab entries.
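
If you want the client to keep retrying rather than failing when the server becomes unreachable, you can replace the plain rw option with standard NFS mount options such as hard and intr (a sketch; defaults and supported options vary by distribution and kernel version):

192.168.0.100:/opt/files  /mnt  nfs  rw,hard,intr  0  0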





NFS criticisms

Criticism drives improvement

Criticisms leveled at NFS security have been at the root of many improvements in NFSv4. The designers of the new version took positive measures to strengthen the security of NFS client-server interaction. In fact, they decided to include a whole new security model.

To understand the security model, you should familiarize yourself with something called the Generic Security Services application programming interface (GSS-API) version 2, update 1. The GSS-API is fully described in RFC 2743, which, unfortunately, is among the most difficult RFCs to understand.

We know from our experience with NFSv4 that it's not easy to make a network file system independent of the operating system. It's even more difficult to make all areas of security independent of both operating systems and network protocols. We must have both, because NFS must be able to handle a fairly generous number of user operations, and it must do so without much reference to the specifics of network protocol interaction.

Connections between NFS clients and servers are secured through what has been rather superficially called strong RPC security. NFSv4 uses the Open Network Computing Remote Procedure Call (ONCRPC) standard codified in RFC 1831. The security model had to be strengthened, and instead of relying on simple authentication (known as AUTH_SYS), a GSS-API-based security flavor known as RPCSEC_GSS has been defined and implemented as a mandatory part of NFSv4. The most important security mechanisms available under NFSv4 include Kerberos version 5 and LIPKEY.

Given that Kerberos has limitations when used across the Internet, LIPKEY has the pleasant advantage of working like Secure Sockets Layer (SSL), prompting users for their user names and passwords, while avoiding the TCP dependence of SSL -- a dependence that NFSv4 doesn't share. You can set NFS up to negotiate for security flavors if RPCSEC_GSS is not required. Past NFS versions did not have this ability and therefore could not negotiate for the quality of protection, data integrity, the requirement for authentication, or the type of encryption.

NFSv3 had come in for a substantial amount of criticism in the area of security. Given that NFSv3 servers ran on TCP, it was perfectly possible to run NFSv3 networks across the Internet. Unfortunately, it was also necessary to open several ports, which led to several well-publicized security breaches. Because NFSv4 made port 2049 mandatory, it became possible to use NFS across firewalls without having to pay too much attention to which ports other protocols, such as the Mount protocol, were listening on (see the firewall sketch after the list below). The elimination of the Mount protocol therefore had multiple positive effects:

  • Mandatory strong authentication mechanisms: NFSv4 makes strong authentication mechanisms mandatory. Kerberos flavors are fairly common, and Lower Infrastructure Public Key Mechanism (LIPKEY) must be supported, as well. NFSv3 never supported much more than UNIX-style standard encryption to authenticate access -- something that led to major security problems in large networks.
  • Mandatory Microsoft Windows NT-style access control list (ACL) schemes: Although NFSv3 allowed for strong encryption for authentication, it did not push Windows NT-style ACL access schemes. Portable Operating System Interface (POSIX)-style ACLs were sometimes implemented but never widely adopted. NFSv4 makes Windows NT-style ACL schemes mandatory.
  • Negotiated authentication styles and mechanisms: NFSv4 makes it possible to negotiate authentication styles and mechanisms. Under NFSv3, it was impossible to do much more than determine manually which encryption styles were used. The system administrator then had to harmonize encryption and security protocols.
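
Because NFSv4 confines itself to the single well-known TCP port 2049, the corresponding firewall rule can be very simple. A minimal sketch using iptables (assuming a default-deny INPUT policy on the NFS server):

# iptables -A INPUT -p tcp --dport 2049 -j ACCEPT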

Is NFS still without peers?

NFSv4 is replacing NFSv3 on most UNIX and Linux systems. As a network file system, NFSv4 has few competitors. The Common Internet File System (CIFS)/Server Message Block (SMB) could be considered a viable competitor, given that it's native to all Windows varieties and (today) to Linux. AFS never made much commercial impact; it emphasized elements of distributed file systems that made data migration and replication easier.

Production-ready Linux versions of NFS had been around since the kernel reached version 2.2, but one of the more common failings of Linux kernel versions was that Linux adopted NFSv3 fairly late. In fact, it took a long time before Linux fully supported NFSv3. When NFSv4 came along, this lack was addressed quickly, and it wasn't just Solaris, AIX, and FreeBSD that enjoyed full NFSv4 support.

NFS is considered a mature technology today, and it has a fairly big advantage: It's secure and usable, and most users find it convenient to use one secure logon to access a network and its facilities, even when files and applications reside on different systems. Although this might look like a disadvantage compared to distributed file systems, which hide system structures from users, don't forget that many applications use files from different operating systems and, therefore, computers. NFS makes it easy to work on different operating systems without having to worry too much about the file system semantics and their performance characteristics.





Resources


Get products and technologies
  • OpenAFS is the open source version of AFS, another distributed file system.

  • SAMBA can be regarded as a file system and can fulfill some of the roles of NFS.





About the authors

Frank Pohlmann

Frank Pohlmann dabbled in the history of Middle Eastern religions before various funding committees decided that research in the history of religious polemics was quite irrelevant to the modern world. He has focused on his hobby -- free software -- ever since. He admits to being the technical editor of the U.K.-based LinuxUser and Developer.


Ken Hess

Ken Hess is a long-time Linux user and enthusiast. He started the Linux User's Group in Tulsa, Oklahoma, in 1996 and writes on a variety of Linux and open source topics. Ken stays busy with his day job, his family, and his art.