Sunday, December 17, 2006

Linux server memory check

Posted by nixcraft in Linux, Troubleshooting, Sys admin, Tips

If your server crashes regularly, the cause could be a buggy kernel, a driver, the power supply, or any other hardware part. Memory (RAM) is one of the most critical server components. Bad memory can cause various problems, such as random Linux server restarts or program segfaults.

Generally, I recommend using the memtester command. It is a userspace tool for stress-testing the memory subsystem and is very effective at finding intermittent and non-deterministic faults under Linux.
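
For example, a hypothetical invocation that locks and tests 512 MB of RAM for three passes might look like the following (adjust the amount to what your system can spare; the exact argument syntax can vary slightly between memtester versions):

# memtester 512M 3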

Recently, Rahul Shah emailed me another interesting method for testing memory. His idea is based on md5 checksums and the dd command.

First, find out the memory size using the free command.
$ free
Output:

                    total       used       free     shared    buffers     cached
Mem:               768304     555616     212688          0      22012     270996
-/+ buffers/cache:            262608     505696
Swap:              979956          0     979956

In the above example, my server has 768304K of memory. Now use the dd command as follows:
$ dd if=/dev/urandom bs=768304 of=/tmp/memtest count=1050
$ md5sum /tmp/memtest; md5sum /tmp/memtest; md5sum /tmp/memtest

According to him, if the checksums do not match, you are guaranteed to have faulty memory. Read the dd man page to understand all the options. dd creates the /tmp/memtest file, and reading it fills memory with cached data. By running md5sum three times you read the same data back from the cache each time, so the checksums should be identical on healthy hardware.
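
To make the comparison easier to eyeball, the same idea can be wrapped in a tiny loop (this is just a sketch of the trick, not part of Rahul's original mail; the block size and count are the ones used above):

$ dd if=/dev/urandom bs=768304 of=/tmp/memtest count=1050
$ for i in 1 2 3; do md5sum /tmp/memtest; done

If any of the three printed checksums differs from the others, suspect the RAM.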

It looks like a good hack to me. However, I still recommend using the memtester userland program. Another option is the memtest86 ISO image: download the ISO, burn it to a CD, and reboot your system from it to run the test (it may take more time). From the project home page:
Memtest86 is a thorough, stand-alone memory test for x86 architecture computers. BIOS-based memory tests are a quick, cursory check and often miss many of the failures that are detected by Memtest86.

Sunday, November 12, 2006

Howto: Configure Linux Virtual Local Area Network (VLAN)

Posted by LinuxTitli in Linux, Networking

VLAN is an acronym for Virtual Local Area Network. Several VLANs can co-exist on a single physical switch; on the Linux side they are configured via software (commands and configuration files) rather than through a hardware interface (you still need to configure the switch itself).

A hub or switch connects all nodes in a LAN, and those nodes can communicate without a router. For example, all nodes in LAN A can talk to each other directly. If a node in LAN A wants to communicate with a node in LAN B, a router is required. Therefore, the individual LANs (A, B, C, and so on) are separated by routers.

As the name suggests, VLANs let you carry multiple logical LANs at once over the same physical network. But what are the advantages of VLANs?

  • Performance
  • Ease of management
  • Security
  • Trunks
  • You don't have to reconfigure any hardware when physically moving a server to another location, and so on.

A fundamental discussion of VLAN concepts is beyond the scope of this article. I am reading the following textbooks, which I found extremely useful and highly recommend:

  • Cisco CCNA ICND books (Part I and II)
  • Andrew S. Tanenbaum, Computer Networks

Configuration problems

I was lucky enough to get a couple of hints from our internal wiki docs :D

  • Not all network drivers support VLAN. You may need to patch your driver.
  • MTU may be another problem. 802.1Q VLANs work by tagging each frame, i.e. an Ethernet header extension that enlarges the header from 14 to 18 bytes. The VLAN tag contains the VLAN ID and priority. See the Linux VLAN site for patches and other information.
  • Do not use VLAN ID 1, as it may be reserved for administrative purposes.

OK, now I need to configure a VLAN on RHEL. (Note: due to some other trouble tickets I was not able to configure the VLAN today, but tomorrow afternoon after the lunch break I'll get my hands dirty with Linux VLANs ;) )

VLAN Configuration

My VLAN ID is 5, so I need to copy the file /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth0.5:

# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0.5

So I have one network card (eth0) and it needs to use tagged network traffic for VLAN ID 5.

The above files will configure the Linux system to have:

  • eth0 - Your regular network interface
  • eth0.5 - Your virtual interface that uses tagged frames for VLAN ID 5

Do not modify the /etc/sysconfig/network-scripts/ifcfg-eth0 file. Now open the /etc/sysconfig/network-scripts/ifcfg-eth0.5 file using the vi text editor:

# vi /etc/sysconfig/network-scripts/ifcfg-eth0.5

Find the DEVICE=eth0 line and replace it with:

DEVICE=eth0.5

Append line:

VLAN=yes

Also make sure you assign a correct IP address, either via DHCP or as a static IP. Save the file. Remove the GATEWAY entry from all other network config files and add the gateway only to the /etc/sysconfig/network file.
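
For reference, a minimal ifcfg-eth0.5 with a static address might end up looking like the following (the IP address and netmask here are illustrative values only):

DEVICE=eth0.5
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.5.10
NETMASK=255.255.255.0
VLAN=yes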

Restart network:

# /etc/init.d/network restart

Please note that if you need to configure VLAN ID 2, copy the file /etc/sysconfig/network-scripts/ifcfg-eth0 to /etc/sysconfig/network-scripts/ifcfg-eth0.2 and repeat the above procedure.

Using vconfig command

The above method works with Red Hat Enterprise Linux without problems. However, you will notice that there is a command called vconfig. The vconfig program allows you to create and remove VLAN devices on a VLAN-enabled kernel. VLAN devices are virtual Ethernet devices that represent the virtual LANs on the physical LAN.

Please note that this is simply another way of configuring a VLAN. If you are happy with the above method, there is no need to follow this one.

Add VLAN ID 5 on eth0 with the following command:

# vconfig add eth0 5

The add command creates a VLAN device on eth0, which results in an eth0.5 interface. You can use the normal ifconfig command to see the device information:

# ifconfig eth0.5

Use ifconfig to assign an IP address:

# ifconfig eth0.5 192.168.1.100 netmask 255.255.255.0 broadcast 192.168.1.255 up

Get detailed information about the VLAN interface:

# cat /proc/net/vlan/eth0.5

If you wish to delete the VLAN interface, run the following commands:

# ifconfig eth0.5 down
# vconfig rem eth0.5


NFSv4 delivers seamless network access


developerWorks

Level: Introductory

Frank Pohlmann (frank@linuxuser.co.uk), Linux user and developer, Freelance
Kenneth Hess (kenneth.hess@gmail.com), Linux user, advocate, and author, Freelance

12 Sep 2006

Network File System (NFS) has been part of the world of free operating systems and proprietary UNIX® flavors since the mid-1980s. But not all administrators know how it works or why there have been new releases. A knowledge of NFS is important simply because the system is vital for seamless access across UNIX networks. Learn how the latest release of NFS, NFSv4, has addressed many criticisms, particularly with regard to security problems, that became apparent in versions 2 and 3.

We take file systems for granted. We work on computers that give us access to printers, cameras, databases, remote sensors, telescopes, compilers, and mobile phones. These devices share few characteristics -- indeed, many of them became a reality only after the Internet became universal (for example, cameras and mobile phones that combine the functions of small computers). However, they all need file systems of some type to store and order data securely.

Typically, we don't really ask how the data, the applications consuming it, and the interfaces presenting the data to us are stored on the computers themselves. Most users would (not unjustifiably) regard a file system as the wall separating them from the bare metal storing bits and bytes. And the protocol stacks connecting file systems usually remain black boxes to most users and, indeed, programmers. Ultimately, however, internetworking all these devices amounts to enabling communication between file systems.

Networking file systems and other holy pursuits

In many ways, communication is little more than a long-distance copying of information. Network protocols were not the only means by which universal communications became possible. After all, every computer system must translate datagrams into something the operating system at the other end understands. TCP is a highly effective transmission protocol, but it's not optimized to facilitate fast access to files or to enable remote control of application software.

Distributed vs. networked computations

Traditional networking protocols don't have much to contribute to the way in which computations are distributed across computers and, indeed, networks. Only foolish programmers would rely on transmission protocols and fiber-optic cables to enable parallel computations. Instead, we typically rely on a serial model, in which link-level protocols take over after connections are initiated and have performed a rather complex greeting between network cards. Parallel computations and distributed file systems are no longer aware of IP or Ethernet. Today, we can safely disregard them as far as performance is concerned. However, security problems are a different matter.

One piece of the puzzle is the way file access is organized across a computer system. Now, it's irrelevant to the accessing system whether the accessed files are available on one or on several presumably rationally distributed computers. File system semantics and file system data structures are two very different topics these days. File system semantics on a Plan 9 installation or on an Andrew File System (AFS)-style distributed file system hide the way in which files are organized or how the file system maps to hardware and networks. NFS does not necessarily hide the way in which files and directories are stored on remote file systems, but it doesn't expose the actual hardware storing the file systems, directories, and files, either.



NFS: A solution to a UNIX problem

Distributed file system access, therefore, needs rather more than a couple of commands enabling users to mount a directory on a computer networked to theirs. Sun Microsystems faced up to this challenge a number of years ago when it started propagating something called Remote Procedure Calls (RPCs) and NFS.

The basic problem that Sun was trying to solve was how to connect several UNIX computers to form a seamless distributed working environment without having to rewrite UNIX file system semantics and without having to add too many data structures specific to distributed file systems. Naturally, it was impossible for a network of UNIX workstations to appear as one large system: the integrity of each system had to be preserved while still enabling users to work on a directory on a different computer without experiencing unacceptable delays or limitations in their workflow.

To be sure, NFS does more than facilitate access to text files. You can distribute "runnable" applications through NFS, as well. Security procedures serve to shore up the network against the malicious takeovers of executables. But how exactly does this happen?

NFS is RPC

NFS is traditionally defined as an RPC application requiring TCP for the NFS server and either TCP or another network congestion-avoiding protocol for the NFS client. The Internet Engineering Task Force (IETF) has published the Request for Comments (RFC) for RPC as RFC 1831. The other standard vital to the functioning of an NFS implementation describes the data formats NFS uses; it has been published as RFC 1832, the "External Data Representation" (XDR) document.

Other RFCs are relevant to security and the encryption algorithms used to exchange authentication information during NFS sessions, but we focus on the basic mechanisms first. One protocol that concerns us is the Mount protocol, which is described in Appendix 1 of RFC 1813.

This RFC tells you which protocols make NFS work, but it doesn't tell you how NFS works today. You've already learned something important by knowing that NFS protocols have been documented as IETF standards. While the latest NFS release was stuck at version 3, RPCs had not progressed beyond the informational RFC stage and thus were perceived as an interest largely confined to Sun Microsystems' admittedly huge engineering task force and its proprietary UNIX variety. Sun NFS has been around in several versions since 1985 and, therefore, predates most current file system flavors by several years. Sun Microsystems turned over control of NFS to the IETF in 1998, and most NFS version 4 (NFSv4) activity occurred under the latter's aegis.

So, if you're dealing with RPC and NFS today, you're dealing with a version that reflects the concerns of companies and interest groups outside Sun's influence. Many Sun engineers, however, retain a deep interest in NFS development.



NFS version 3

NFS in its version 3 avatar (NFSv3) was not stateful: NFSv4 is. This fundamental statement is unlikely to raise any hackles today, although the TCP/IP world on which NFS builds has mostly been stateless -- a fact that has helped traffic analysis and security software companies do quite well for themselves.

NFSv3 had to rely on several subsidiary protocols to seamlessly mount directories on remote computers without becoming too dependent on underlying file system mechanisms. NFS has not always been successful in this attempt. To give an example, the Mount protocol returned the initial file handle, while the Network Lock Manager protocol handled file locking. Both operations required state, which NFSv3 did not provide. The result was complex interactions between protocol layers that did not reflect similar data-flow mechanisms. Now, if you add the fact that file and directory creation in Microsoft® Windows® works very differently from UNIX, matters become rather complicated.

Because NFSv3 had to use several ports to accommodate some of its subsidiary protocols, you end up with a rather complex picture of ports and protocol layers and all their attendant security concerns. Today, this model of operation has been abandoned, and all operations that subsidiary protocol implementations previously executed from individual ports are now handled by NFSv4 from a single, well-known port.

NFSv3 was also ready for Unicode-enabled file system operation -- an advantage that until the late 1990s had to remain fairly theoretical. In all, it mapped well to UNIX file system semantics and motivated competing distributed file system implementations like AFS and Samba. Not surprisingly, Windows support was poor, but Samba file servers have since addressed file sharing between UNIX and Windows systems.



NFS version 4

NFSv4 is, as we pointed out, stateful. Several radical changes made this behavior possible. We already mentioned that the subsidiary protocols, which had to be called as user-level processes, have been abandoned. Instead, every file-opening operation and quite a few RPC calls are turned into kernel-level file system operations.

All NFS versions define each unit of work in terms of RPC client and server operations. Each NFSv3 request required a fairly generous number of RPC calls and port-opening calls to yield a result. Version 4 simplifies matters by introducing a so-called compound operation that subsumes a large number of file system object operations. The immediate effect is, of course, that far fewer RPC calls and far less data have to traverse the network, even though each RPC call carries substantially more data while accomplishing far more. It is estimated that NFSv3 RPC calls required five times the number of client-server interactions that NFSv4 compound RPC procedures demand.

RPC is not really that important anymore and essentially serves as a wrapper around the number of operations encapsulated within the NFSv4 stack. This change also makes the protocol stack far less dependent on the underlying file system semantics. But the changes don't mean that the file system operations of other operating systems were neglected: For example, Windows shares require stateful open calls. Statefulness not only helps traffic analysis but, when included in file system semantics, makes file system operations much more traceable. Stateful open calls enable clients to cache file data and state -- something that would otherwise have to happen on the server. In the real world, where Windows clients are ubiquitous, NFS servers that work seamlessly and transparently with Windows shares are worth the time you'll spend customizing your NFS configuration.



Using NFS

NFS setup is generically similar to Samba. On the server side, you define file systems or directories to export, or share; the client side mounts those shared directories. When a remote client mounts an NFS-shared directory, that directory is accessed in the same way as any other local file system. Setting up NFS from the server side is an equally simple process. Minimally, you must create or edit the /etc/exports file and start the NFS daemon. To set up a more secure NFS service, you must also edit /etc/hosts.allow and /etc/hosts.deny. The client side of NFS requires only the mount command. For more information and options, consult the Linux® man pages.

The NFS server

Entries in the /etc/exports file have a straightforward format. To share a file system, edit the /etc/exports file and supply a file system (with options) in the general format:

directory (or file system)   client1(option1,option2) client2(option1,option2)

General options

Several general options are available to help you customize your NFS implementation; a combined example follows the list. They include:

  • secure: This option -- the default -- uses available TCP/IP ports below 1024 for NFS connections. Specifying insecure disables this option.
  • rw: This option allows NFS clients read/write access. The default option is read only.
  • async: This option may improve performance, but it can also cause data loss if you restart the NFS server without first performing a clean shutdown of the NFS daemon. The default setting is sync.
  • no_wdelay: This option turns off the write delay. If you set async, NFS ignores this option.
  • nohide: If you mount one directory over another, the lower directory is typically hidden or appears empty. Specify nohide to disable this behavior.
  • no_subtree_check: This option turns off subtree checking, which performs some security checks that you may not want to bypass. The default option is to have subtree checks enabled.
  • no_auth_nlm: This option, also specified as insecure_locks, tells the NFS daemon not to authenticate locking requests. If you're concerned about security, avoid this option. The default option is auth_nlm or secure_locks.
  • mp (mountpoint=path): By explicitly declaring this option, NFS requires that the exported directory be mounted.
  • fsid=num: This option is typically used in NFS failover scenarios. Refer to the NFS documentation if you want to implement NFS failover.
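
Putting several of these together, a hypothetical export of a local directory to one subnet (the path and network below are made-up examples, not from the article) could look like this:

/srv/projects 192.168.1.0/24(rw,sync,no_wdelay,no_subtree_check)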

User mapping

Through user mapping in NFS, you can grant pseudo or actual user and group identity to a user working on an NFS volume. The NFS user has the user and group permissions that the mapping allows. Using a generic user and group for NFS volumes provides a layer of security and flexibility without a lot of administrative overhead.

User access is typically "squashed" when using files on an NFS-mounted file system, which means that a user accesses files as an anonymous user who, by default, has read-only permissions to those files. This behavior is especially important for the root user. Cases exist, however, in which you want a user to access files on a remote system as root or some other defined user. NFS allows you to specify a user -- by user identification (UID) number and group identification (GID) number -- to access remote files, and you can disable the normal behavior of squashing.

User mapping options include:

  • root_squash: This option doesn't allow root user access on the mounted NFS volume.
  • no_root_squash: This option allows root user access on the mounted NFS volume.
  • all_squash: This option, which is useful for a publicly accessible NFS volume, squashes all UIDs and GIDs and only uses the anonymous account. The default setting is no_all_squash.
  • anonuid and anongid: These options change the anonymous UIDs and GIDs to specific user and group accounts.

Listing 1 shows examples of /etc/exports entries.


Listing 1. Example /etc/exports entries
 
/opt/files 192.168.0.*
/opt/files 192.168.0.120
/opt/files 192.168.0.125(rw,all_squash,anonuid=210,anongid=100)
/opt/files *(ro,insecure,all_squash)

The first entry exports the /opt/files directory to all hosts in the 192.168.0 network. The next entry exports /opt/files to a single host: 192.168.0.120. The third entry specifies host 192.168.0.125 and grants read/write access to the files with user permissions of user id=210 and group id=100. The final entry is for a "public" directory that has read-only access and allows access only under the anonymous account.
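
Once /etc/exports is in place, the entries still have to be activated. The exact init-script name varies by distribution; a minimal sketch for a Red Hat-style system might be:

# /etc/init.d/nfs start
# exportfs -ra
# exportfs -v

Here exportfs -ra (re)exports everything listed in /etc/exports, and exportfs -v lists what is currently being exported.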

The NFS client

A word of caution

After you have used NFS to mount a remote file system, that system will also be part of any total system backup that you perform on the client system. This behavior can have potentially disastrous results if you don't exclude the newly mounted directories from the backup.

To use NFS as a client, the client computer must be running rpc.statd and portmap. You can run a quick ps -ef to check for these two daemons. If they are running (and they should be), you can mount the server's exported directory with the generic command:

mount server:directory local_mount_point

Generally speaking, you must be running as root to mount a file system. From a remote computer, you can use the following command (assuming that the NFS server has an IP address of 192.168.0.100):

mount 192.168.0.100:/opt/files  /mnt

Your distribution may require you to specify the file system type when mounting a file system. If so, run the command:

mount -t nfs 192.168.0.100:/opt/files /mnt

The remote directory should mount without issue if you've set up the server side correctly. Now, run the cd command to the /mnt directory, then run the ls command to see the files. To make this mount permanent, you must edit the /etc/fstab file and create an entry similar to the following:

192.168.0.100:/opt/files  /mnt  nfs  rw  0  0

Note: Refer to the fstab man page for more information on /etc/fstab entries.
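
As a quick sanity check (generic commands, not part of the original article), you can confirm that the client-side daemons are running and that the remote file system is actually mounted:

$ ps -ef | egrep 'portmap|rpc.statd'
$ mount | grep nfs
$ df -h /mnt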



NFS criticisms

Criticism drives improvement

Criticisms leveled at NFS security have been at the root of many improvements in NFSv4. The designers of the new version took positive measures to strengthen the security of NFS client-server interaction. In fact, they decided to include a whole new security model.

To understand the security model, you should familiarize yourself with something called the Generic Security Services application programming interface (GSS-API) version 2, update 1. The GSS-API is fully described in RFC 2743, which, unfortunately, is among the most difficult RFCs to understand.

We know from our experience with NFSv4 that it's not easy to make a network file system independent of the operating system. But it's even more difficult to make all areas of security independent of operating systems and network protocols. We must have both, because NFS must be able to handle a fairly generous number of user operations, and it must do so without much reference to the specifics of network protocol interaction.

Connections between NFS clients and servers are secured through what has been rather superficially called strong RPC security. NFSv4 uses the Open Network Computing Remote Procedure Call (ONCRPC) standard codified in RFC 1831. The security model had to be strengthened, and instead of relying on simple authentication (known as AUTH_SYS), a GSS-API-based security flavor known as RPCSEC_GSS has been defined and implemented as a mandatory part of NFSv4. The most important security mechanisms available under NFSv4 include Kerberos version 5 and LIPKEY.

Given that Kerberos has limitations when used across the Internet, LIPKEY has the pleasant advantage of working like Secure Sockets Layer (SSL), prompting users for their user names and passwords, while avoiding the TCP dependence of SSL -- a dependence that NFSv4 doesn't share. You can set NFS up to negotiate for security flavors if RPCSEC_GSS is not required. Past NFS versions did not have this ability and therefore could not negotiate for the quality of protection, data integrity, the requirement for authentication, or the type of encryption.

NFSv3 had come in for a substantial amount of criticism in the area of security. Given that NFSv3 servers ran on TCP, it was perfectly possible to run NFSv3 networks across the Internet. Unfortunately, it was also necessary to open several ports, which led to several well-publicized security breaches. By making port 2049 mandatory for NFS, it became possible to use NFSv4 across firewalls without having to pay too much attention to what ports other protocols, such as the Mount protocol, were listening to. Therefore, the elimination of the Mount protocol had multiple positive effects:

  • Mandatory strong authentication mechanisms: NFSv4 makes strong authentication mechanisms mandatory. Kerberos flavors are fairly common, and Lower Infrastructure Public Key Mechanism (LIPKEY) must be supported, as well. NFSv3 never supported much more than UNIX-style standard encryption to authenticate access -- something that led to major security problems in large networks.
  • Mandatory Microsoft Windows NT-style access control list (ACL) schemes: Although NFSv3 allowed for strong encryption for authentication, it did not push Windows NT-style ACL access schemes. Portable Operating System Interface (POSIX)-style ACLs were sometimes implemented but never widely adopted. NFSv4 makes Windows NT-style ACL schemes mandatory.
  • Negotiated authentication styles and mechanisms: NFSv4 makes it possible to negotiate authentication styles and mechanisms. Under NFSv3, it was impossible to do much more than determine manually which encryption styles were used. The system administrator then had to harmonize encryption and security protocols.

Is NFS still without peers?

NFSv4 is replacing NFSv3 on most UNIX and Linux systems. As a network file system, NFSv4 has few competitors. The Common Internet File System (CIFS)/Server Message Block (SMB) could be considered a viable competitor given that it's native to all Windows varieties and (today) to Linux. AFS never made much commercial impact, and it emphasized elements of distributed file systems that made data migration and replication easier.

Production-ready Linux versions of NFS had been around since the kernel reached version 2.2, but one of the more common failings of Linux kernel versions was the fact that Linux adopted NFSv3 fairly late. In fact, it took a long time before Linux fully supported NFSv3. When NFSv4 came along, this lack was addressed quickly, and it wasn't just Solaris, AIX, and FreeBSD that enjoyed full NFSv4 support.

NFS is considered a mature technology today, and it has a fairly big advantage: It's secure and usable, and most users find it convenient to use one secure logon to access a network and its facilities, even when files and applications reside on different systems. Although this might look like a disadvantage compared to distributed file systems, which hide system structures from users, don't forget that many applications use files from different operating systems and, therefore, computers. NFS makes it easy to work on different operating systems without having to worry too much about the file system semantics and their performance characteristics.



Resources

Get products and technologies
  • OpenAFS is the open source version of AFS, another distributed file system.

  • SAMBA can be regarded as a file system and can fulfill some of the roles of NFS.



About the authors

Frank Pohlmann

Frank Pohlmann dabbled in the history of Middle Eastern religions before various funding committees decided that research in the history of religious polemics was quite irrelevant to the modern world. He has focused on his hobby -- free software -- ever since. He admits to being the technical editor of the U.K.-based LinuxUser and Developer.


Ken Hess

Ken Hess is a long-time Linux user and enthusiast. He started the Linux User's Group in Tulsa, Oklahoma, in 1996 and writes on a variety of Linux and open source topics. Ken stays busy with his day job, his family, and his art.

Thursday, October 12, 2006

Five Bash Tips


These are some of the coolest Bash tips I have come across. Using them lets you fully enjoy the efficiency of the CLI and avoid the hassle of repetitive typing, saving a great deal of time.

  1. Clear the screen

    Normally we use the clear command to clear the screen. Have you tried its shortcut, Ctrl+L? Personally I find the key combination faster.

  2. Reverse search

    Sometimes we need to re-run a command we typed earlier. Press Ctrl+R, start typing the command, and Bash will complete it for you from the history.

  3. Command substitution

    Nobody can avoid mistyping a command now and then. No problem: you can fix it with ^texttosubstitute^substitution. For example, if you typed the incorrect command sudo apt-get updkte, Bash of course cannot execute it; you can correct the mistake by entering ^updkte^update (or ^k^a).

  4. Repeat the last command

    If you want to run the previous command again, just type !!.

  5. Reuse the last argument

    If you want to reuse the argument of the last command, use !$. For example, if the last command you ran was ls -lsh, you can now use ls !$ to achieve the same result. (A combined sketch of these history tricks follows this list.)
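As a combined sketch of tips 3-5 (the directory name and the typo are invented for this example):

$ mkdir /tmp/reports
$ cd !$                  (!$ expands to /tmp/reports, the last argument of the previous command)
$ sudo apt-get updkte
$ ^updkte^update         (re-runs the previous line with the typo corrected)
$ !!                     (repeats the corrected command once more)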

(Via kratorius::code, thanks!)

Monday, October 09, 2006

Unattended installation instructions for Windows* XP

The unattended installation method applies to Microsoft* Windows* XP, Windows* 2000, and Windows* Server 2003. It is used to install the RAID or AHCI driver, as outlined in the Microsoft document "Windows NT Automated Installation Deployment Guide".

To perform an unattended installation of the RAID or AHCI driver, complete the following steps:

  1. Extract the IAAHCI.INF, IAAHCI.CAT, IASTOR.INF, IASTOR.CAT, IASTOR.SYS, and TXTSETUP.OEM files from the installation package.

    To extract these files, run the executable (for example, IATA50_ENU.EXE for the Intel® Matrix Storage Manager) with the following command-line options: -A -A -PC:\<path>, as described in the Advanced Installation Instructions section of README.TXT.

    Does the system have a 32-bit or a 64-bit processor?
    • If the system has a 32-bit processor, the extracted files are located in the Drivers folder.
    • If the system has a 64-bit processor, the extracted files are located in the Drivers64 folder.

  2. As applicable, insert the following lines into the UNATTEND.TXT file.

    For systems configured for RAID mode:

    Note: Systems using the Intel® 82801ER SATA RAID controller, Intel® 6300ESB SATA RAID controller, Intel® 82801FR SATA RAID controller, Intel® 82801GR/GH SATA RAID controller, Intel® 82801GHM SATA RAID controller, or Intel® 631xESB/632xESB SATA RAID controller can all follow this same procedure. Simply substitute the text shown in quotation marks.

    // Insert the following lines into the UNATTEND.TXT file

    [MassStorageDrivers]
    "Intel® 82801R/DO/DH SATA RAID Controller" = OEM

    [OEMBootFiles]
    iaStor.inf
    iaStor.sys
    iaStor.cat
    Txtsetup.oem

    For systems configured for AHCI mode:

    Note: Systems using the Intel® 82801FR SATA AHCI controller, Intel® 82801FBM SATA AHCI controller, Intel® 82801GR/GH SATA AHCI controller, Intel® 82801GBM SATA AHCI controller, or Intel® 631xESB/632xESB SATA AHCI controller can all follow this same procedure. Simply substitute the text shown in quotation marks.

    // Insert the following lines into the UNATTEND.TXT file

    [MassStorageDrivers]
    "Intel® 82801R/DO/DH SATA AHCI Controller" = OEM

    [OEMBootFiles]
    iaAhci.inf
    iaStor.sys
    iaAhci.cat
    Txtsetup.oem

  3. Place the IAAHCI.CAT, IAAHCI.INF, IASTOR.CAT, IASTOR.INF, IASTOR.SYS, and TXTSETUP.OEM files in the following folder:

    :\i386\$OEM$\Textmode

Operating systems:

Windows* 2000, Windows XP Professional, Windows* XP Home Edition, Windows Server* 2003


Sunday, September 10, 2006

Cisco rommon mode password recovery

Cisco Router Configuration Manual
Reference:
1. Cisco router password recovery
When a Cisco router's password has been changed by mistake or forgotten, you can proceed as follows:
1. During boot, press Ctrl+Break to enter ROM monitor mode
2. Use the o command to read the original value of the configuration register
> o    (the normal value is 0x2102)
3. Change the setting as follows so that the NVRAM configuration is ignored at boot
> o/r 0x**4*    (Cisco 2500 series command)
rommon 1 > confreg 0x**4*    (Cisco 2600 and 1600 series command)
(The normal value is 0x2102.)
4. Restart the router
> i
rommon 2 > reset
5. In Setup mode, answer No to all questions
6. Enter privileged mode
Router>enable
7. Load the configuration from NVRAM
Router#configure memory
8. Restore the original configuration register value and bring up all interfaces
hostname#configure terminal
hostname(config)#config-register 0x<value>
hostname(config)#interface xx
hostname(config-if)#no shutdown
9. View and record the lost password
hostname#show configuration (show startup-config)
10. Change the password
hostname#configure terminal
hostname(config)#line console 0
hostname(config-line)#login
hostname(config-line)#password xxxxxxxxx
hostname(config-line)#
hostname(config-line)#write memory (copy running-config startup-config)


You will want to use a terminal emulator for this.

I did this on a 2611, but it also applies to the 1600.
Here is my procedure for your reference; I have done it successfully.
1. First prepare a cable with RJ45 connectors and plug it into the router's console port.
2. Use the following terminal settings (do not use Win NT; Win9x is recommended):
9600 baud rate
No parity
8 data bits
1 stop bit
No flow control
3. Power on the router:
System Bootstrap, Version 11.3(19)AA, EARLY DEPLOYMENT RELEASE SOFTWARE (fc1)
Copyright (c) 1998 by cisco Systems, Inc.
C2600 processor with 32768 Kbytes of main memory
Main memory is configured to 32 bit mode with parity enabled
As soon as you see this message, press Ctrl+Break to send a Break, and you will enter ROMMON mode.
4. At the rommon> prompt, enter confreg:
rommon 1 > confreg
Answer the questions as follows:
Configuration Summary
enabled are:
load rom after netboot fails
console baud: 9600
boot: image specified by the boot system commands
or default to: cisco2-C2600

do you wish to change the configuration? y/n [n]: y    <-- answer y
enable "diagnostic mode"? y/n [n]:
enable "use net in IP bcast address"? y/n [n]:
disable "load rom after netboot fails"? y/n [n]:
enable "use all zero broadcast"? y/n [n]:
enable "break/abort has effect"? y/n [n]:
enable "ignore system config info"? y/n [n]: y    <-- answer y
change console baud rate? y/n [n]:
change the boot characteristics? y/n [n]: y    <-- answer y
enter to boot:
0 = ROM Monitor
1 = the boot helper image
2-15 = boot system
[2]: 2    <-- choose 2

5. rommon 2 > reset    (restart the router)
6.
--- System Configuration Dialog ---    (the router automatically enters the Setup dialog)

Would you like to enter the initial configuration dialog? [yes/no]:
Press Ctrl-C to abort Setup.
7. The router comes up in user mode:
Router>enable    (enter privileged mode; no password is required)
Router#show startup-config    (display the startup-config stored in NVRAM)
(output omitted)
8. Load the startup-config into DRAM:
Router#configure memory
9. 2611#configure terminal    (enter configuration mode)
10. Change the enable password to 2611:
2611(config)#enable secret 2611
11. Write the running-config back to NVRAM:
2611#copy running-config startup-config
12.
2611#show version
Cisco Internetwork Operating System Software
(intermediate output omitted)
Configuration register is 0x2142    (the current configuration register is 0x2142)
13.
2611#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
2611(config)#config-reg 0x2102    (change the configuration register back)
2611(config)#^Z    (Ctrl+Z)
2611#
00:01:54: %SYS-5-CONFIG_I: Configured from console by console
2611# reload    (reboot the router)
That completes the whole process. It is a little long-winded, but every step is necessary.
I hope the above procedure helps anyone who needs it.



2500 Password Recovery

Step 1 : Power on router (Press [Ctrl] + [Break] within 30 sec)
Step 2 : o/r 0x2142 [Enter]
Step 3 : i [Enter]
Then it will show the following:
Step 4 : Would you like to enter initial configuration dialog? [Yes] : No [Enter]
Step 5 : Router> enable [Enter]
Step 6 : router# copy start run [Enter]
Step 7 : router# conf t [Enter]
Step 8 : router(config)# config-register 0x2102 [Enter]
Step 9 : router(config)# enable secret Newpassword [Enter]
Step 10 : router(config)# exit [Enter]
Step 11 : router# copy run start [Enter]
Step 12 : router# reload [Enter]

1600 (2600;3600)Password Recovery

Step 1 : Power on router (Press [Ctrl] + [Break] within 30 sec)
Then it will show the following:
Step 2 : rommon> confreg [Enter]
Step 2 : rommon> confreg [Enter]
Step 3 : "Do you wish to change configuration [y/n]?" Type y
Step 4 : Type n to all of the questions that appear until you reach
"ignore system config info [y/n]", then type y
Step 5 : Type n to all of the questions that appear until you reach
"change boot characteristics [y/n]", then type y
Step 6 : At "enter to boot", type 2 [Enter]
Step 7 : "Do you wish to change configuration [y/n]?" Type n
Step 8 : reset [Enter]
Step 9 : Then follow Steps 5 to 12 of the 2500 password recovery procedure above



Multiple LUNs (scsi_mod) and RHEL4



I'm trying to kickstart a machine and partition disks on 3 fibre-attached LUNs (lpfc/Emulex), but the installer only sees the first one (/dev/sda) and not the other two (/dev/sdb, /dev/sdc).

Out of the box, the RHEL4 initrd is configured to only discover a
single scsi LUN. In order to fix this, you have to use a custom
initrd:

# echo "options scsi_mod max_luns=xx" >> /etc/modprobe.conf
# mkinitrd -f
# reboot
(where xx > 1)

This works fine *after* the box is installed, but is there a way to
make this work during the kickstart install? I tried using some %pre
scripts, but it didn't work:

---ks.cfg---
...
%pre
rmmod scsi_mod
modprobe scsi_mod max_luns=128
...
%post
echo "options scsi_mod max_luns=255" >> /etc/modprobe.conf
mkinitrd -f -v
---ks.cfg---


Howto: Linux see new fiber channel attached disk LUNs without rebooting

Posted in Linux

Q. How do I force fdisk to see new fiber channel attached disk LUNs without rebooting my Linux server or system?

A. Hot swapping or hot plugging is the ability to remove and replace components of a machine, usually a computer, while it is operating. Once the appropriate software is installed on the computer, a user can plug and unplug the component without rebooting.

You can add a new SCSI device to a Linux system through the SCSI hotplug mechanism.

Type the following command as root user:

$ echo "scsi add-single-device 1 2 3 4">/proc/scsi/scsi

Where,

  • 1 - HBA number
  • 2 - channel id on the HBA
  • 3 - SCSI ID of the new device
  • 4 - LUN of the new device

You need to replace 1, 2, 3, and 4 with the actual values as per the list above.
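
For example, if the new disk is LUN 1 at SCSI ID 3 on channel 0 of the first HBA (all hypothetical values), the command becomes:

# echo "scsi add-single-device 0 0 3 1" > /proc/scsi/scsi

Afterwards the new device should show up in /proc/scsi/scsi and in the output of fdisk -l.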



http://www.cyberciti.biz/faq/howto-linux-see-new-fiber-channel-attached-disk-luns-without-rebooting/

Saturday, September 09, 2006

Howto: build Linux kernel module against installed kernel w/o full kernel source tree

Recently I received a question via email:

How do I build Linux kernel module against installed or running Linux kernel? Do I need to install new kernel source tree from kernel.org?

To be frank, you do not need a full new source tree just to compile or build a module against the running kernel; that is, an exploded source tree is not required to build a kernel driver or module. The instructions outlined below will benefit developers and power users immensely.

This matters when you just want to compile and install a driver for new hardware, such as a wireless card or a SCSI device. With the following method you save time, as you are not going to compile the entire Linux kernel.

Please note that for this hack you only need the Linux kernel headers, not the full kernel source tree. Install the kernel headers package, which provides the headers from the running Linux kernel. These headers are used when compiling modules, as well as by GNU glibc and other system libraries. Use the following command to install the kernel headers:
# apt-get install kernel-headers-2.6.xx.xx.xx

Replace xx.xx with your actual running kernel version (e.g. 2.6.8-2) and architecture name (e.g. 686/em64t/amd64). Use the uname -r command to get the running kernel version string. Please note that the above command only installs the kernel headers, not the entire kernel source tree.
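
On newer Debian and Ubuntu systems the package is typically named linux-headers-<version> rather than kernel-headers-<version>, so (assuming an apt-based system) the equivalent one-liner is:

# apt-get install linux-headers-$(uname -r)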

All you need to do is point the Makefile at the current kernel build directory. You can obtain this directory name by typing the following command:
$ ls -d /lib/modules/$(uname -r)/build
Output:

/lib/modules/2.6.15.4/build

Let's say you have a C source file called hello.c. Now create a Makefile as follows in the directory containing the hello.c file:
$ vi Makefile
Append the following text (note that the command under the default: target must be indented with a tab):
obj-m := hello.o
KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)
default:
	$(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules

Save and close the file. Type the following command to build the hello.ko module:
$ make

To load the Linux kernel module, type the command:
# modprobe hello
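
To confirm that the module actually loaded (generic commands, not from the original post), check the module list and the kernel log; if modprobe cannot find the module because it has not been installed under /lib/modules, insmod ./hello.ko loads it directly from the build directory:

# lsmod | grep hello
# dmesg | tail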

Updated for accuracy.


http://www.cyberciti.biz/tips/build-linux-kernel-module-against-installed-kernel-source-tree.html