"Linux Gazette...making Linux just a little more fun!"


The Answer Guy


By James T. Dennis, [email protected]
Starshine Technical Services, http://www.starshine.org/


Contents:


 Mounting Disks Under Red Hat 4.0

From: Bigby, Bruce W. [email protected]

Hi. The RedHat 4.0 control-panel has an interesting problem. I have two entries in my /etc/fstab file for my SCSI Zip Drive--one for mounting a Win95 Zip removable disk and another for mounting a removable Linux ext2fs disk--

/dev/sda4 /mnt/zip   ext2fs rw,noauto 0 0
/dev/sda4 /mnt/zip95 vfat   rw,noauto 0 0
I do this so that I can easily mount a removable zip disk by supplying only the appropriate mount point to the mount command--for example, by supplying
mount /mnt/zip
when I want to mount a Linux ext2fs disk, and
mount /mnt/zip95
when I want to mount a Windows 95 Zip disk.

 Yes, I do this all the time (except that I use the command line for all of this -- and vi to edit my fstab). I also add the "user" and a bunch of "nosuid,nodev,..." parameters to my options field. This allows me or my wife (the only two users with console access to the machine) to mount a new magneto optical, floppy, or CD without having to 'su'.
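(As an illustration -- the device and mount point here are just placeholders for whatever your own fstab uses -- such an entry might look like:

/dev/sda4 /mnt/zip   ext2   rw,noauto,user,nosuid,nodev 0 0

... the "user" option is the one that lets an ordinary console user run 'mount /mnt/zip' without su'ing to root.)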

 Unfortunately, the control-panel's mount utility treats the two lines as duplicates and removes the additional lines that begin with /dev/sda4. Consequently, the control panel's mount utility only sees the first line,

/dev/sda4 /mnt/zip   ext2fs rw,noauto 0 0
In addition, the utility also modifies my original /etc/fstab. I do not

 Bummer! Since I don't use the GUI controls I never noticed that.

 desire this behavior. I prefer that the utility be fairly dumb and not modify my original /etc/fstab. Has RedHat fixed this problem in 4.2?

 I don't know. There are certainly enough other fixes and upgrades to be worth installing it (although -- with a .1 version coming out every other month -- maybe you want to just download selective fixes and wait for the big 5.0).

(My current guess -- totally unsubstantiated by even an inside rumor -- is that they'll shoot for integrating glibc -- the GNU C library -- into their next release. That would be a big enough job to warrant a jump in release numbers).

 Can I obtain the sources and modify the control-panel's mount utility so that it does not remove these so-called "duplicates"?

 Last I heard the control-panel was all written in Python (I think they converted all the TCL to Python by 4.0). In any event I'm pretty sure that it's TCL, Python and Tk (with maybe some bash for some parts). So you already have the sources.

The really important question here is why you aren't asking the support team at RedHat (or at least posting to their "bugs@" address). This 'control-panel' is certainly specific to Red Hat's package.

According to the bash man page, bash is supposed to source the .profile, or .bash_profile, in my home directory. However, when I login, bash does not source my .profile. How can I ensure that bash sources the .profile of my login account--$HOME/.profile?

 The man page and the particular configuration (compilation) options in your binary might not match.

You might have an (empty?) ~/.bash_profile or ~/.bash_login (bash looks for these in that order -- with .profile being the last -- and it sources only the first of them that it finds).
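A quick way to see which of these files you actually have lying around (the paths are just your own home directory, of course):

ls -l ~/.bash_profile ~/.bash_login ~/.profile

Whichever of those exists first in that order is the only one a login bash will read.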

You might have something weird in your /etc/profile or /etc/bashrc that's preventing your ~/.bash_* or ~/.profile from being sourced.

Finally you might want to double check that you really are running bash as your login shell. There could be all sorts of weird bugs in your configuration that effectively start bash and fail to signal to it that this is a "login" shell.

Normally login exec()'s bash with an argv[0] of "-bash" (preceding the name with a dash). I won't get into the gory details -- but if you were logging in with something that failed to do this: bash wouldn't "know" that it was a login shell -- and would behave as though it were a "secondary" shell (like you invoked it from your editor).
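One quick (if crude) check -- this is just an illustration, your output may differ:

echo $0

A login shell will typically show "-bash" (with the leading dash); a non-login shell shows plain "bash" or a path like /bin/bash.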

If all else fails go over to prep.ai.mit.edu and grab the latest version of the GNU bash sources. Compile them yourself.

-- Jim


 Weird LILO Problem

From: David Runnels [email protected]

Hi Jim. I read your column in the Linux Gazette and I have a question. (If I should have submitted it some other way I apologize.)

 I recommend using the [email protected] address for now. At some point I hope to have SSC set up a [email protected] address -- or maybe get linux.org to give me an account and set up some custom mail scripts.

 I've been using Linux casually for the last couple of years and several months ago I installed RedHat 4.0 on the second IDE drive of a Win95 system. Though I've used System Commander in the past I don't like using it with Win95 so I had the RedHat install process create a boot floppy. This has always worked fine, and I made a second backup floppy (using dd) which I also made sure booted fine.

 This probably isn't really a "boot" floppy. It sounds like a "lilo" floppy to me. The difference is that a boot floppy has a kernel on it -- a "lilo" floppy just has the loader on it.

The confusing thing about Linux is that it can be booted in so many ways. In a "normal" configuration you have Lilo as the master boot program (on the first hard drive -- in the first sector of track 0 -- with the partition table). Another common configuration places lilo in the "superblock" (logical boot record) of the Linux "root" partition (allowing the DOS boot block, or the OS/2 or NT boot manager -- or some third party package like System Commander) to process the partition table and select the "active" partition -- which *might* be the Linux root partition.

Less common ways of loading Linux: use LOADLIN.EXE (or SYSLINUX.EXE) -- which are DOS programs that can load a Linux kernel (kicking DOS out from under them so to speak), put Lilo on a floppy (which is otherwise blank) -- or on a non-Linux boot block (which sounds like your situation).

Two others: You can put Lilo on a floppy *with* a Linux kernel -- or you can even write a Linux kernel to a floppy with no lilo. That last option is rarely used.

The point of confusion is this: LILO loads the Linux kernel using BIOS calls. It offers one the opportunity to pass parameters to the kernel (compiled into its boot image via the "append" directive in /etc/lilo.conf -- or entered manually at boot time at the lilo prompt).

Another source of confusion is the concept that LILO is a block of code and data that's written to a point that's outside the filesystems on a drive -- /sbin/lilo is a program that writes this block of boot code according to a set of directives in the /etc/lilo.conf. It's best to think of the program /sbin/lilo as a "compiler" that "compiles" a set of boot images according to the lilo.conf and writes them to some place outside of your filesystem.

Yet another source of confusion is that the Linux kernel has a number of default parameters compiled into it. These can be changed using the 'rdev' command (which was originally used to set the "root device" flags in a kernel image file). 'rdev' basically patches values into a file. It can be used to set the "root device," the "initial video mode" and a number of other things. Some of these settings can be over-ridden via the LILO prompt and append lines. LOADLIN.EXE can also pass parameters to the kernel that it loads.
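For example (the kernel image path and root device here are purely illustrative):

rdev /vmlinuz /dev/sda5     # patch the default root device into the image
rdev -R /vmlinuz 1          # set the "mount root read-only" flag

('rdev' with no arguments just prints an mtab-style line for your current root filesystem.)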

There's a big difference between using a kernel image written directly on a floppy -- and a LILO that's built to load an image that's located on a floppy filesystem (probably minix or ext2fs). With LILO the kernel must be located on some device that is accessible with straight BIOS calls.

This usually prevents one from using LILO to boot off of a third IDE or SCSI disk drive (since most systems require a software driver to allow DOS or other OS' to "see" these devices). I say "usually" because there are some BIOS' and especially some BIOS extensions on some SCSI and EIDE controllers that may allow LILO to access devices other than the first two floppies and the first two hard drives. However, those are rare. Most PC hardware can only "see" two floppy drives and two hard drives -- which must be on the same controller -- until an OS loads some sort of drivers.

In the case where a kernel is directly located on the raw floppy -- and in the case where the kernel is located on the floppy with LILO -- the kernel has the driver code for your root device (and controllers) built in. (There are also complex new options using 'initrd' -- an "initial RAM disk" -- which allows a modular kernel to load the drivers for its root devices.)

Yet another thing that's confusing to the DOS user -- and most transplants from other forms of Unix -- is that the kernel doesn't have to be located on the root device. In fact LOADLIN.EXE requires that the kernel be located on a DOS filesystem.

To make matters more complicated you can have multiple kernels on any filesystem, any of them might use any filesystem as their root device, and these relationships (between kernel and root device/filesystem) can be set in several ways -- i.e. by 'rdev' or at compile time, vs. via the LOADLIN or LILO command lines.

I recommend that serious Linux users reserve a small (20 or 30 Mb) partition with just a minimal installation of the root/base Linux software on it. This should be on a separate device from your main Linux filesystems.

Using this you have an alternative (hard drive based) boot method which is much faster and more convenient than digging out the installation boot/root floppies (or having to go to a working machine and build a new set!). I recommend the same thing for most Solaris and FreeBSD installations. If you have a DOS filesystem on the box -- at least stash a copy of LOADLIN.EXE and a few copies of your favorite kernels in C:\LINUX\ (or wherever).
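A LOADLIN invocation (from DOS or a batch file) looks something like this -- the kernel path and root partition being whatever applies to your system:

C:\LINUX\LOADLIN.EXE C:\LINUX\VMLINUZ root=/dev/hda3 ro

... the 'ro' mounts the root filesystem read-only so the startup scripts can fsck it, just as LILO's "read-only" directive does.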

Now that more PC SCSI cards support booting off of CD-ROM's (a feature that's been long overdue!) you can get by without heeding my advice -- IF YOU HAVE SUCH A CONTROLLER AND A CD TO MATCH.

(Incidentally -- I found out quite by accident that the Red Hat 4.1 CD is "bootable" on Adaptec 2940 controllers -- if you have the Adaptec configured to allow it. I've also heard that the NCR SymBIOS PCI controller supports this -- though I haven't tested that yet).

In any event we should all make "rescue disks" -- unfortunately these are trickier than they should be. Look for the Bootdisk HOWTO for real details about this.

 About a week ago I put the Linux floppy in the diskette drive, reset the machine and waited for the LILO prompt. Everything went fine, but all I got were the letters LI and everything stopped. I have tried several times, using the original and the backup diskette, with the same results.

 Did you add a new drive to the system?

 I have done nothing (that I can think of!) to my machine and I'm at a loss as to what might be causing this. Just to ensure that the floppy drive wasn't acting funny, I've booted DOS from it and that went fine.

 When you booted DOS were you able to see the drive? I'd get out your installation floppy (or floppies -- I don't remember whether Red Hat 4.0 had a single floppy system or not -- 4.1 and 4.2 only require one for most hardware). Boot from that and choose "rescue" or switch out of the installation script to a shell prompt. You should then be able to attempt mounting your root filesystem.

If that fails you can try to 'fsck' it. After that it's probably a matter of reinstallation and restoring from backups.
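From the rescue shell that usually amounts to something like (substitute your own root partition for /dev/hdb1):

mount -t ext2 /dev/hdb1 /mnt

... and, if that fails:

e2fsck -f /dev/hdb1      # run this only while the filesystem is NOT mounted

If the mount succeeds your data is probably intact and the problem is just the boot loader.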

 Any ideas you have would be appreciated. Thanks for your time.

Dave Runnels

 Glad I could help.


 Running FileRunner

From: David E. Stern [email protected]

I wanted to let you know that you were right about relying too heavily on rpm. In the distant past, I used text-based file compression utilities, so I tried it again and tarballs are actually quite nice. I also found that rpm --nodeps will help. Tarballs are also nice because not all apps are distributed with rpm. (bonus! :-) I'm also told that multiple versions of tcl/tk can peaceably coexist, although rpm won't allow it by default. Another ploy with rpm which I didn't see documented was that to avoid circular dependencies, update multiple rpms at the same time; i.e.: rpm -Uvh app1.rpm app2.rpm app3.rpm . Another thing I learned about was that there are some non-standard (contributed) libraries that are required for certain apps, like afio and xpm. Thanks for the great ideas and encouragement.

The end goal: to install FileRunner, I simply MUST have it! My intermediate goal is to install Tcl/Tk 7.6/4.2, because FileRunner needs these to install, and I only have 7.5/4.1. However, when I try to upgrade tcl/tk, other apps rely on older tcl/tk libraries, at least that's what the messages allude to:

libtcl7.5.so is needed by some-app
libtk4.1.so is needed by some-app

(where some-app is python, expect, blt, ical, tclx, tix, tk, tkstep,...)

I have enough experience to know that apps may break if I upgrade the libraries they depend on. I've tried updating some of those other apps, but I run into further and circular dependencies--like a cat chasing its tail.

In your opinion, what is the preferred method of handling this scenario? I must have FileRunner, but not at the expense of other apps.

 It sounds like you're relying too heavily on RPM's. If you can't afford to risk breaking your current stuff, and you "must" have the upgrade you'll have to do some stuff beyond what the RPM system seems to do.

One method would be to grab the sources (SRPM or tarball) and manually compile the new TCL and tk into /usr/local (possibly with some changes to their library default paths, etc). Now you'll probably need to grab the FileRunner sources and compile that to force it to use the /usr/local/wish or /usr/local/tclsh (which, in turn, will use the /usr/local/lib/tk if you've compiled it all right).
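The rough shape of that (a sketch only -- the exact configure switches for the tcl7.6/tk4.2 source trees may differ a bit) is:

cd tcl7.6/unix
./configure --prefix=/usr/local
make && make install
cd ../../tk4.2/unix
./configure --prefix=/usr/local --with-tcl=../../tcl7.6/unix
make && make install

... which leaves the RPM-managed copies under /usr alone.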

Another approach is to set up a separate environment (separate disk, a large subtree of an existing disk -- into which you chroot, or a separate system entirely) and test the upgrade path where it won't inconvenience you by failing. A similar approach is to do a backup, test your upgrade plan -- (if the upgrade fails, restore the backup).

 Thanks, -david

 You're welcome. This is a big problem in all computing environments (and far worse in DOS, Windows, and NT systems than in most multi-user operating systems). At least with Unix you have the option of installing a "playpen" (accessing it with the chroot call -- or by completely rebooting on another partition if you like).

Complex interdependencies are unavoidable unless you require that every application be statically linked and completely self-sufficient (without even allowing their configuration files to be separate). So this will remain an aspect of system administration where experience and creativity are called for (and a good backup may be the only thing between you and major inconvenience).

-- Jim


 Adding Linux to a DEC XLT-366

From: Alex Pikus [email protected]

I have a DEC XLT-366 with NTS4.0 and I would like to add Linux to it. I have been running Linux on an i386 for a while.

I have created 3 floppies:

I have upgraded AlphaBIOS to v5.24 (latest from DEC) and added a Linux boot option that points to a:\

 You have me at a severe disadvantage. I've never run Linux on an Alpha. So I'll have to try answering this blind.

 When I load MILO I get the "MILO>" prompt without any problem. When I do

show
or
boot ...
at the MILO I get the following result ...

SCSI controller gets identified as NCR810 on IRQ 28 ... test1 runs and gets stuck "due to a lost interrupt" and the system hangs ...

In WinNTS4.0 the NCR810 appears on IRQ 29.

 My first instinct is to ask if the autoprobe code in Linux (Alpha) is broken. Can you use a set of command-line (MILO) parameters to pass information about your SCSI controller to your kernel? You could also see about getting someone else with an Alpha based system to compile a kernel for you -- and make sure that it has values in its scsi.h file that are appropriate to your system -- as well as ensuring that the correct drivers are built in.

 How can I make further progress here?

 It's a tough question. Another thing I'd look at is to see if the Alpha system allows booting from a CD-ROM. Then I'd check out Red Hat's (or Craftworks') Linux for Alpha CD's -- asking each of them if they support this sort of boot.

(I happened to discover that the Red Hat Linux 4.1 (Intel) CD-ROM was bootable when I was working with one system that had an Adaptec 2940 controller where that was set as an option. This feature is also quite common on other Unix platforms such as SPARC and PA-RISC systems -- so it is a rather late addition to the PC world).

 Thanks!
Alex.


 Disk Support

From: Andrew Ng lulu@@asiaonline.net

Dear Sir, I have a question to ask: Does Linux support disks with density 2048bytes/sector?

 Apparently not. This is a common size for CD-ROM's -- but it's not at all normal for any other media.

 I have bought a Fujitsu MO drive which supports up to 640MB MO disks with density 2048bytes/sector. The Slackware Linux system does not support access to disks with this density. Windows 95 and NT support this density and work very well. Is there any version of Linux which supports 2048bytes/sector? If not, is there any project working on that?

 I believe the drive ships with drivers for DOS, Windows, Windows '95 and NT. The OS' don't "support it" -- the manufacturer supports these OS'.

Linux, on the other hand, does support most hardware (without drivers being supplied by the hardware manufacturers). Granted we get some co-operation from many manufacturers. Some even contribute code to the main kernel development.

We prefer the model where the hardware manufacturer releases free code to drive their hardware -- whether that code is written for Linux, FreeBSD or any other OS. Release it once and all OS' can port it and benefit from it.

 I hear a lot of praise about Linux. Is Linux superior to Windows NT in all aspects?

 That's a controversial question. Any statement like: Is "foo" superior to "bar" in all aspects? ... is bound to cause endless (and probably acrimonious) debate.

Currently NT has a couple of advantages: Microsoft is a large company with lots of money to spend on marketing and packaging. They are very aggressive in making "partnerships" and building "strategic relationships" with the management of large companies.

Microsoft has slowly risen to dominance in the core applications markets (word processors, spreadsheets, and databases). Many industry "insiders" (myself included) view this as being the result of "trust"-worthy business practices (a.k.a. "verging on monopolistic").

In other words many people believe that MS Word isn't the dominant word processor because it is technically the superior product -- but because MS was able to supply the OS features they needed when they wanted them (and perhaps able to slip the schedules of certain releases during the critical development phases of their competitors).

The fact that the OS, and the principal programming tools, and the major applications are all from the same source has generated an amazing amount of market antagonism towards Microsoft. (Personally I think it's a bit extreme -- but I can understand how many people feel "trapped" and understand the frustration of thinking that there's "no choice").

Linux doesn't have a single dominant applications suite. There are several packages out there -- Applixware, StarOffice, Caldera's Internet Office Suite. Hopefully Corel's Java Office will also be useful to Linux, FreeBSD and other users (including Windows and NT).

In addition to these "suites" there are also several individual applications like Wingz (a spreadsheet system), Mathematica (the premier symbolic mathematics package), LyX (the free word processor -- LaTeX front-end -- that's under development), Empress, /rdb (database systems), Flagship and dbMan IV (xBase database development packages), Postgres '95, mSQL, InfoFlex, Just Logic's SQL, MySQL (database servers) and many more. (Browse through the Linux Journal _Buyer's_Guide_ for a large list -- also waltz around the web a bit).

Microsoft's SQL Server for NT is getting to be pretty good. Also, there are a lot of people who program for it -- more than you'll find for InfoFlex, Postgres '95 etc. A major problem with SQL is that the servers are all different enough to call for significant differences in the front end applications -- which translates to lots of programmer time (and money!) if you switch from one to another. MS has been very successful getting companies to adopt NT Servers for their "small" SQL projects (which has been hurting the big three -- Oracle, Sybase and Informix). Unfortunately for Linux -- database programmers and administrators are very conservative -- they are a "hard sell."

So Linux -- despite the excellent stability and performance -- is not likely to make a significant impact as a database server for a couple of years at least. Oracle, Sybase and Informix have "strategic relationships" with SCO, Sun, and other Unix companies.

The established Unix companies viewed Linux as a threat until recently. They now seem to see it as a mixed blessing. On the up side Linux has just about doubled the number of systems running Unix-like OS', attracted somewhere between two and eight million new converts away from the "Wintel" paradigm, and even wedged a little bit of "choice" into the minds of the industry media. On the down side SCO can no longer charge thousands of dollars for the low end of their systems. This doesn't really affect Sun, DEC, and HP so much -- since they are primarily hardware vendors who only got into the OS business to keep their iron moving out the door. SCO and BSDI have the tough fight since the bulk of their business is OS sales.

(Note: BSDI is *not* to be confused with the FreeBSD, NetBSD, OpenBSD, or 386BSD (Jolix) packages. They are a company that produces a commercial Unix, BSDI/OS. The whole Free|Net|Open-BSD set of programming projects evolved out of the work of Mr. and Mrs. Jolitz -- which was called 386BSD -- and which I call "Jolix" -- a name which I also spotted in the _Using_C-Kermit_ book from Digital Press).

So there don't seem to be any Oracle, SyBase, or Informix servers available for Linux. The small guys like JustLogic and InfoFlex have an opportunity here -- but it's a small crack in a heavy door and some of them are likely to get their toes broken in the process.

Meanwhile NT will keep getting market share -- because their entry level is still a tiny fraction of the price of any of the "big guys."

I've just barely scratched the tip of the iceberg (to thoroughly blend those metaphors). There are so many other aspects of comparison it's hard to even list them -- let alone talk about how Linux and NT measure up to them.

It's also important to realize that it's not just NT vs. Linux. There are many forms of Unix -- most of them are quite similar to Linux from a user and even from an administrator's point of view. There are many operating systems that are vastly different from either NT (which is supposed to be fundamentally based on VMS) or the various Unix variants.

There are things like Sprite (a Berkeley research project), Amoeba and Chorus (distributed network operating systems), EROS, and many others.

Here's a link where you can find out more about operating systems in general: Yahoo! Computers and Internet: Operating Systems: Research

-- Jim


 Legibility

From: Robert E Glacken [email protected]

I use a 256 shade monochrome monitor. The QUESTIONS are invisible.

 What questions? What OS? What GUI? (I presume that the normal text is visible in text mode so you must be using a GUI of some sort)?

I wouldn't expect much from a monochrome monitor set to show 256 (or even 127) shades of grey. There's almost no one in the PC/Linux world that uses those -- so there's almost no one who tunes their color tables and applications to support them.

Suggestions -- get a color screen -- or drop the GUI and use text mode.

-- Jim


 MetroX Problems

From: Allen Atamer [email protected]

I am having trouble setting up my X server. Whether I use MetroX or XFree86 to set it up, it's still not working.

When I originally chose MetroX to install, I got to the setup screen, chose my card and resolution, saved and exited. Then I started up X Windows, and my screen loaded the X server, but the graphics were all messed up. I exited, then changed some settings, and now I can't even load the X server. The Xerrors file says it had problems loading the 'core'.

 Hmm. You don't mention what sort of video card you're using or what was "messed up." As I've said many times in my column -- I'm not much of an "Xpert" (or much of a "TeXpert" for that matter).

MetroX and XFree86 each have their own support pages on the web -- and there are several X specific newsgroups where you'd find people who are much better with X than I.

Before you go there to post I'd suggest that you type up the type of video card and monitor you have in excruciating detail -- and make sure you go through the X HOWTO's and the Red Hat manual. Also be sure to check the errata page at Red Hat (http://www.redhat.com/errata.html) -- this will let you know about any problems that were discovered after the release of 4.1.

One other thing you might try is getting the new version (4.2 -- Biltmore) -- and check its errata sheet. You can buy a new set of CD's (http://www.cheapbytes.com is one inexpensive source) or you can use up a bunch of bandwidth by downloading it all. The middle road is to download just the parts you need.

I notice (looking at the errata sheets as I type this) that XFree86 is up to version 3.3.1 (at least). This upgrade is apparently primarily to fix some buffer overflow (security) problems in the X libraries.

 By the way, how do I mount what's on the second cd and read it? (vanderbilt 4.1)

 First umount the first CD with a command like: umount /cdrom Remove it. Then 'mount' the other one with a command like: mount -t iso9660 -o ro /dev/scd0 /cdrom ... where /cdrom is some (arbitrary but existent) mount point and /dev/scd0 is the device node that points to your CD drive (that would be the first SCSI CD-ROM on your system -- IDE and various other CD's have different device names).

To find out the device name for your CD use the mount command BEFORE you unmount the other CD. It will show each mounted device and the current mount point.

Personally I use /mnt/cd as my mount point for most CD's. I recommend adding an entry to your /etc/fstab file (the "filesystems table" for Unix/Linux) that looks something like this:

# /etc/fstab
/dev/scd0      /mnt/cd            iso9660 noauto,ro,user,nodev,nosuid 0 0

This will allow you to use the mount and umount commands as a normal user (without the need to su to 'root').

I also recommend changing the permissions of the mount command to something like:

-rwsr-x---   1 root     console		26116 Jun  3  1996 /bin/mount
(chgrp console `which mount` && chmod 4750 `which mount`)

... so that only members of the group "console" can use the mount command. Then add your normal user account to that group.
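(Adding yourself to that group is just a matter of putting your login name on the "console" line of /etc/group -- the GID and user name here are made up:

console::101:jim

... log out and back in for the new group membership to take effect.)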

The idea of all this is to strike a balance between the convenience and reduced "fumblefingers" exposure of running the privileged command as a normal user -- and the potential for (as yet undiscovered) buffer overflows to compromise the system by "guest" users.

(I recommend similar procedures for ALL SUID binaries -- but this is an advanced issue that goes *WAY* beyond the scope of this question).

Allen, You really need to get a copy of the "Getting Started" guide from the Linux Documentation Project. This can be downloaded and printed (there's probably a copy on your CD's) or you can buy the professionally bound editions from any of several publishers -- my favorite being O'Reilly & Associates (http://www.ora.com).

Remember that the Linux Gazette "Answer Guy" is no substitute for reading the manuals and participating in Linux newsgroups and mailing lists.

-- Jim


 Installing Linux

From: Aryeh Goretsky [email protected]

 [ Aryeh, I'm copying my Linux Gazette editor on this since I've put in enough explanation to be worth publishing it ]

 ..... why ... don't they just call it a disk boot sector . .... Okay, I've just got to figure out what the problem is, then. Are there any utilities like NDD for Linux I can run that will point out any errors I made when entering the superblock info?

 Nothing with a simple, colorful interface. 'fsck' is at least as good with ext2 filesystems as NDD is with FAT (MS-DOS) partitions. However 'fsck' (or, more specifically, e2fsck) has a major advantage since the ext2fs was designed to be robust. The FAT filesystem was designed to be simple enough that the driver code and the rest of the OS could fit on a 48K (yes, forty-eight kilobytes) PC (not XT, not AT, and not even close to a 386). So, I'm not knocking NDD when I say that fsck works "at least" as well.

However, fsck doesn't touch your MBR -- it will check your superblock and recommend a command to restore the superblock from one of the backups if yours is damaged. Normally the newfs (like MS-DOS' FORMAT) or mke2fs (basically the same thing) will scatter extra copies of the superblock across the filesystem (roughly one for every 8192 blocks -- one per block group). So there are usually plenty of backups.
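If the primary superblock ever is trashed you'd point e2fsck at one of those backups explicitly -- something like this (the device name is illustrative; 8193 is the traditional first backup on a 1K-block filesystem):

e2fsck -b 8193 /dev/hda2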

So, usually, you'd just run fdisk to check your partitions and /sbin/lilo to write a new MBR (or other boot sector). /sbin/lilo will also update its own "map" file -- and may (optionally) make a backup of your original boot sector or MBR.

(Note: There was an amusing incident on one of the mailing lists or newsgroups -- in which a user complained that Red Hat had "infected his system with a virus." It turns out that lilo had moved the existing (PC/MBR) virus from his MBR to a backup file -- where it was finally discovered. So, lilo had actually *cured* his system of the virus).

Actually when you run /sbin/lilo you're "compiling" the information in the /etc/lilo.conf file and writing that to the "boot" location -- which you specify in the .conf file.

You can actually call your lilo.conf anything you like -- and you can put it anywhere you like -- you'd just have to call /sbin/lilo with a -C switch and a path/file name. /etc/lilo.conf is just the built-in default which the -C option over-rides.
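For example (the alternate file name here is made up):

/sbin/lilo -C /etc/lilo.conf.test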

Here's a copy of my lilo.conf (which I don't actually use -- since I use LOADLIN.EXE on this system). As with many (most?) Unix configuration files the comments start with hash (#) signs.

boot=/dev/hda
# write the resulting boot block to my first IDE hard drive's MBR.
# if this was /dev/hdb4 (for example) /sbin/lilo would write the 
# resulting block to the logical boot record on the fourth partition
# of my second IDE hard drive.   /dev/sdc would mean to write it to
# the MBR of the third SCSI disk.
# /sbin/lilo will print a warning if the boot location is likely to 
# be inaccessible to most BIOS' (i.e. would require a software driver
# for DOS to access it).

## NOTE:  Throughout this discussion I use /sbin/lilo to refer to the 
## Linux executable binary program and LILO to refer to the resulting
## boot code that's "compiled" and written by /sbin/lilo to whatever
## boot sector your lilo.conf calls for.  I hope this will minimize the
## confusion -- though I've liberally re-iterated this with parenthetical
## comments as well.

# The common case is to put boot=/dev/fd0H1440 to specify that the
# resulting boot code should be written to a floppy in the 1.44Mb
# "A:" drive when /sbin/lilo is run.  Naturally this would require
# that you use this diskette to boot any of the images and "other"
# stanzas listed in the rest of this file.  Note that the floppy
# could be completely blank -- no kernel or files are copied to it
# -- just the boot sector!


map=/boot/map
	# This is where /sbin/lilo will store a copy of the map file --
	# which contains the cylinder/sector/side address of the images
	# and message files  (see below)
	# It's important to re-run /sbin/lilo to regenerate the map
	# file any time you've done anything that might move any of 
	# these image or message files (like defragging the disk,
	# restoring any of these images from a backup -- that sort
	# of thing!).


install=/boot/boot.b
	# This file contains code for LILO (the boot loader) -- this is 
	# an optional directive -- and unnecessary in this case since it 
	# simply specifies the default location.
	
prompt
	# This instructs the LILO boot code to prompt the user for 
	# input.  Without this directive  LILO would just wait
	# up to "delay" time (default 0 tenths of a second -- none)
	# and boot using the default stanza.
	# if you leave this and the "timeout" directives out --
	# but you put in a delay=X directive -- then LILO won't 
	# prompt the user -- but will wait for X tenths of a second
	# (600 is 10 seconds).  During that delay the user can hit a 
	# shift key, or any of the NumLock, Scroll Lock type keys to 
	# request a LILO prompt.

timeout=50
	# This sets the amount of time LILO (the boot code) will 
	# wait at the prompt before proceeding to the default
	# 0 means 'wait forever'

message=/etc/lilo.message
	# this directive tells /sbin/lilo (the conf. "compiler") to 
	# include the contents of this message in the prompt which LILO
	# (the boot code) displays at boot time.  It is a handy place to
	# put some site specific help/reminder messages about what
	# you call your kernels and where you put your alternative bootable
	# partitions and what you're going to do to people who reboot your 
	# Linux server without a damn good reason.

other=/dev/hda1
	label=dos
	table=/dev/hda
	# This is a "stanza"
	# the keyword "other" means that this is referring to a non-Linux
	# OS -- the location tells LILO (boot code) where to find the 
	# "other" OS' boot code (in the first partition of the first IDE --
	# that's a DOS limitation rather than a Linux constraint).
	# The label directive is an arbitrary but unique name for this stanza
	# to allow one to select this as a boot option from the LILO 
	# (boot code) prompt.

	# Because it is the first stanza it is the default OS --
	# LILO will boot this partition if it reaches timeout or is 
	# told not to prompt.  You could also over-ride that using a 
	# default=$labelname$ directive up in the "global" section of the
	# file.

image=/vmlinuz
	label=linux
	root=/dev/sda5
	read-only
	# This is my "normal" boot partition and kernel.
	# the "root" directive is a parameter that is passed to the 
	# kernel as it loads -- to tell the kernel where its root filesystem
	# is located.  The "read-only" is a message to the kernel to initially
	# mount the root filesystem read-only -- so the rc (AUTOEXEC.BAT) 
	# scripts can fsck (do filesystem checks -- like CHKDSK) on it.  
	# Those rc scripts will then normally remount the fs in "read/write" 
	# mode.

image=/vmlinuz.old
	label=old
	root=/dev/sda5
	append="single"
	read-only
	# This example is the same except that it loads a different kernel
	# (presumably an older one -- duh!).  The append= directive allows
	# me to pass arbitrary directives on to the kernel -- I could use this
	# to tell the kernel where to find my Ethernet card in I/O, IRQ, and 
	# DMA space -- here I'm using it to tell the kernel that I want to come
	# up in "single-user" (fix a problem, don't start all those networking
	# gizmos) mode.

image=/mnt/tmp/vmlinuz
	label=alt
	root=/dev/sdb1
	read-only

	# This last example is the most confusing.  My image is on some other
	# filesystem (at the time that I run /sbin/lilo to "compile" this 
	# stanza). The root fs is on the first partition of the 2nd SCSI drive.
	# It is likely that /dev/sdb1 would be the filesystem mounted under 
	# /mnt/tmp when I would run /sbin/lilo.  However it's not "required"
	# My kernel image file could be on any filesystem that was mounted
	# /sbin/lilo will warn me if the image is likely to be inaccessible
	# by the BIOS -- it can't say for sure since there are a lot of 
	# BIOS' out there -- some of the newer SCSI BIOS' will boot off of a 
	# CD-ROM!

I hope that helps. The lilo.conf man page (in section 5) gives *lots* more options -- like the one I just saw while writing this that allows you to have a password for each of your images -- or for the whole set. Also there are a number of kernel options described in the BootPrompt-HOWTO. One of the intriguing ones is panic= -- which allows you to tell the Linux kernel how long to sit there displaying a kernel panic. The default is "forever" -- but you can use the append= line in your lilo.conf to pass a panic= parameter to your kernel -- telling it how many seconds to wait before attempting to reboot.
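For example, an image stanza could carry an extra line like this (30 seconds is an arbitrary choice):

	append="panic=30"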

In the years that I've used Linux I've only seen a couple (like two or three) kernel panics (that could be identified as such). Perhaps a dozen times I've had a Linux system freeze or go comatose enough that I hard reset it. (Most of those involve very bad hardware IRQ conflicts). Once I've even tricked my kernel into scribbling garbage all over one of my filesystems (don't play with linear and membase in your XConfig file -- and, in particular don't specify a video memory base address that's inside of your system's RAM address space).

So I'm not sure if setting a panic= switch would help much. I'd be much more inclined to get a hardware watchdog timer card and enable the existing support for that in the kernel. Linux is the only PC OS that I know of that comes with this support "built-in."

For those that aren't familiar with them a watchdog timer card is a card (typically taking an ISA slot) that implements a simple count-down and reset (strobing the reset line on the system bus) feature. This is activated by a driver (which could be a DOS device driver, a Netware Loadable Module, or a little chunk of code in the Linux kernel). Once started the card must be updated periodically (the period is set as part of the activation/update). So -- if the software hangs -- the card *will* strobe the reset line.

(Note: this isn't completely fool-proof. Some hardware states might require a complete power cycle and some sorts of critical server failures will render the systems services unavailable without killing the timer driver software. However it is a damn sight better than just hanging).

These cards cost about $100 (U.S.) -- which is a pity since there's only about $5 worth of hardware there. I think most Sun workstations have this feature designed into the motherboard -- which is what PC manufacturers should scramble to do.


 AG

At 11:43 AM 6/10/97 -0700, you wrote: Subject: Once again, I try to install Linux... ...and fail miserably. This is getting depressing. Someone wanna explain this whole superblock concept to me? Use small words....

 Aryeh, Remember master boot records (MBR's)? Remember "logical" boot records -- or volume boot records?

A superblock is the Unix term for a logical boot record. Linux uses normal partitions that are compatible with the DOS, OS/2, NT (et al) hard disk partitioning scheme.

To boot Linux you can use LILO (the Linux loader) which can be written to your MBR (most common), to your "superblock" or to the "superblock" of a floppy. This little chunk of code contains a reference (or "map") to the device and logical sector of one or more Linux kernels or DOS (or OS/2) bootable partitions.

There is a program called "lilo" which "compiles" a lilo.conf (configuration file) into this LILO "boot block" and puts it onto the MBR, superblock or floppy boot block for you. This is the source of most of the confusion about LILO. I can create a boot floppy with nothing but this boot block on it -- no kernel, no filesystems, nothing. LILO doesn't care where I put any of my Linux kernels -- so long as it can get to it using BIOS calls (which usually limits you to putting the kernel on one of the first two drives connected to the first drive controller on your system).

Another approach is to use LOADLIN.EXE -- this is a DOS program that loads a Linux (or FreeBSD) kernel. The advantage of this is that you can have as many kernel files as you like, and they can be located on any DOS accessible device (even if you had to load various weird device drivers to be able to see that device).

LOADLIN.EXE is used by some CD-ROM based installation packages -- avoiding the necessity of using a boot floppy.

The disadvantages of LOADLIN include the fact that you may have loaded some device drivers and memory managers that have re-mapped (hooked into) critical BIOS interrupt vectors. LOADLIN often needs a "boot time hardware vector table" (which it usually writes as C:\REALBIOS.INT -- a small hidden/system file). Creating this file involves booting from a "stub" floppy (which saves the table) and rebooting/restarting the LOADLIN configuration to tell it to copy the table from the floppy to your HD. This must be done whenever you change video cards, add any controller with a BIOS extension (a ROM) or otherwise play with the innards of your machine.

Call me and we can go over your configuration to narrow down the discussion. If you like you can point your web browser at www.ssc.com/lg and look for articles by "The Answer Guy" there. I've described this at greater length in some of my articles there.

-- Jim


 Adding Programs to the Pull Down Menus

From: Ronald B. Simon [email protected]

Thank you for responding to my request. By the way I am using RedHat release 4 and I think TheNextLevel window manager. I did find a .fvwm2rc.programs tucked away in...

 Ronald, TheNextLevel is an fvwm derivative.

/etc/X11/TheNextLevel/. I added a define ProgramCM(Title,,,program name) and under the start/applications menu I saw Title. When I put the cursor over it and pressed the mouse button, everything froze. I came to the conclusion that I am in way over my head and that I probably need to open a window within the program that I am trying to execute. Anyway I will search for some 'C' code that shows me how to do that. Thanks again!

 I forgot to mention that any non-X program should be run through an xterm. This is normally done with a line in your rc file like: Exec "Your Shell App" exec xterm -e /path/to/your/app & ... (I'm using fvwm syntax here -- I'll trust you to translate to TNL format). Try that -- it should fix you right up.

Also -- when you think your X session is locked up -- try the Ctrl-Alt-Fx key (where Fx is the function key that corresponds to one of your virtual consoles). This should switch you out of GUI mode and into your normal console environment. You might also try Alt-SysReq (Print-Screen on most keyboards) followed by a digit from the alphanumeric portion of your keyboard (i.e. NOT from the numeric keypad). This is an alternative binding for VC switching that might be enabled on a few systems. If all of that fails you can try Ctrl-Alt-Backspace. This should (normally) signal the X server to shutdown.

Mostly I doubt that your server actually hung. I suspect that you confused it a bit by running a non-X program not "backgrounded" (you DO need those trailing ampersands) and failing to supply it with a communications channel back to X (an xterm).

Please remember that my knowledge of X is very weak. I hardly ever use it and almost never administer/customize it. So you'll want to look at the L.U.S.T. mailing list, or the comp.windows.x or (maybe) the comp.os.linux.x newsgroups (although there is nothing to these questions which is Linux specific). I looked extensively for information about TheNextLevel on the web (in Yahoo! and Alta Vista). Unfortunately the one page that almost all of the references pointed to was down.

The FVWM home page is at: http://www3.hmc.edu/~tkelly/docs/proj/fvwm.html

-- Jim


 Linux SKIP

From: Jesse Montrose [email protected]

 Time warp: This message was lost in my drafts folder while I was looking up some of the information. As it turns out the wait was to our advantage. Read on.

 Date: Sun, 16 Mar 1997 13:54:34 -0800

Greetings, this question is intended for the Answer Guy associated with the Linux Gazette..

I've recently discovered and enjoyed your column in the Linux Gazette, I'm hoping you might have news about a linux port of sun's skip ip encryption protocol.

Here's the blurb from skip.incog.com: SKIP secures the network at the IP packet level. Any networked application gains the benefits of encryption, without requiring modification. SKIP is unique in that an Internet host can send an encrypted packet to another host without requiring a prior message exchange to set up a secure channel. SKIP is particularly well-suited to IP networks, as both are stateless protocols. Some of the advantages of SKIP include:

 I heard a bit about SKIP while I was at a recent IETF conference. However I must admit that it got lost in the crowd of other security protocols and issues.

So far I've paid a bit more attention to the Free S/WAN project that's being promoted by John Gilmore of the EFF. I finally got ahold of a friend of mine (Hugh Daniel -- one of the architects of Sun's NeWS project -- and a well-known cypherpunk and computer security professional).

He explained that SKIP is the "Secure Key Interchange Protocol" -- that it is a key management protocol (incorporated in ISAKMP/Oakley).

For secure communications you need:

 My employer is primarily an NT shop (with sun servers), but since I develop in Java, I'm able to do my work in linux. I am one of about a dozen telecommuters in our organization, and we use on-demand ISDN to dial in directly to the office modem bank, in many cases a long distance call.

 I'm finally working on configuring my dial-on-demand ISDN line here at my place. I've had diald (dial-on-demand over a 28.8 modem) running for about a month now. I just want to cut down on that dial time.

 We're considering switching to public Internet connections, using skip to maintain security. Skip binaries are available for a few platforms (windows, freebsd, sunos), but not linux. Fortunately the source is available (http://skip.incog.com/source.html) but it's freebsd, and I don't know nearly enough deep linux to get it compiled (I tried making source modifications).

 If I understand it correctly SKIP is only a small part of the solution.

Hopefully FreeS/WAN will be available soon. You can do quite a bit with ssh (and I've heard of people who are experimenting with routing through some custom made tunnelled interface). FreeBSD and Linux both support IP tunneling now.

For information on using ssh and IP tunnels to build a custom VPN (virtual private network) look in this month's issue of Sys Admin Magazine (July '97). (Shameless plug: I have an article about C-Kermit appearing in the same issue).

Another method might be to get NetCrypto. Currently the package isn't available for Linux -- however McAfee is working on a port. Look at http://www.mcafee.com

 After much time with several search engines, the best I could come up with was another fellow also looking for a linux version of skip :) Thanks! jesse montrose

 Jesse, Sorry I took so long to answer this question. However, as I say, this stuff has changed considerably -- even in the two months between the time I started this draft message and now.

-- Jim


 ActiveX for Linux

From: Gerald Hewes [email protected]

Jim, I read your response on ActiveX in the Linux Gazette. At http://www.ssc.com/lg/issue18/lg_answer18.html#active

Software AG is porting the non-GUI portions of ActiveX called DCOM to Linux. Their US site where it should be hosted appears down as I write this e-mail message but there is a link on their home page to a Linux DCOM beta: http://www.softwareag.com

 I believe the link ought to be http://www.sagus.com/prod-i~1/net-comp/dcom/index.htm

 As for DCOM, its main value for the Linux community is in making Microsoft Distributed Object Technology available to the Linux community. Microsoft is trying to push DCOM over CORBA.

 I know that MS is "trying to push DCOM over CORBA" (and OpenDoc, and now, JavaBeans). I'm also aware that DCOM stands for "distributed component object model" and CORBA is the "common object request broker architecture" and SOM is IBM's "system object model" (OS/2).

The media "newshounds" have dragged these little bones around and gnawed on them until we've all seen them. Nonetheless I don't see its "main value to the Linux community."

These "components" or "reusable objects" will not make any difference so long as significant portions of their functionality are tied to specific OS (GUI) semantics. However, this coupling between specific OS' has been a key feature of each of these technologies.

It's Apple's OpenDoc, IBM's DSOM, and Microsoft's DCOM!

While I'm sure that each has its merits from the programmer's point of view (and I'm in no position to comment on their relative technical pros or cons) -- I have yet to see any *benefit* from a user or administrative point of view.

So I suppose the question here becomes:

Is there any ActiveX (DCOM) control (component) that delivers any real benefit to any Linux user? Do any of the ActiveX controls not have a GUI component to them? What does it mean to make the "non-GUI portions" of DCOM available? Is there any new network protocol that this gives us? If so, what is that protocol good for?

For more information, check out http://www.microsoft.com/oledev

While I encourage people to browse around -- I think I'll wait until someone can point out one DCOM component, one JavaBean, one CORBA object, or one whatever-buzzword- you-want-to-call-it-today and can explain in simple "Duh! I'm a user!" terms what the *benefit* is.

Some time ago -- in another venue -- I provided the net with an extensive commentary on the difference between "benefits" and "features." The short form is this:

A benefit is relevant to your customer. To offer a benefit requires that you understand your customer. "Features" bear no relation to a customer's needs. However mass marketing necessitates the promotion of features -- since the *mass* marketer can't address individual and niche needs.

Example: Microsoft operating systems offer an "easy to use graphical interface" -- first "easy to use" is highly subjective. In this case it means that there are options listed on menus and buttons and the user can guess at which ones apply to their need and experiment until something works. That is a feature -- one I personally loathe. To me "easy to use" means having documentation that includes examples that are close to what I'm trying to do -- so I can "fill in the blanks." Next there is the ubiquitously touted "GUI." That's another *feature*. To me it's of no benefit -- I spend 8 to 16 hours a day looking at my screen. Text mode screens are far easier on the eyes than any monitor in graphical mode.

To some people, such as the blind, GUI's are a giant step backward in accessibility. The GUI literally threatens to cut these people off from vital employment resources.

I'm not saying that the majority of the world should abandon GUI's just because of a small minority of people who can't use them and a smaller, crotchety contingent of people like me that just don't like them. I'm merely trying to point out the difference between a "feature" and a "benefit."

The "writing wizards" offered by MS Word are another feature that I eschew. My writing isn't perfect and I make my share of typos, as well as spelling and grammatical errors. However most of what I write goes straight from my fingers to the recipient -- no proofreading and no editing. When I've experimented with spell checkers and "fog indexes" I've consistently found that my discourse is beyond their capabilities -- much too specialized and involving far too much technical terminology. So I have to over-ride more than 90% of the "recommendations" of these tools.

Although my examples have highlighted Microsoft products we can turn this around and talk about Linux' famed "32-bit power" and "robust stability." These, too, are *features*. Stability is a benefit to someone who manages a server -- particularly a co-located server at a remote location. However the average desktop applications user could care less about stability. So long as their applications manage to autosave the last three versions of their documents the occasional reboot is just a good excuse to go get a cup of coffee.

Multi-user is a feature. Most users don't consider this to be a benefit -- and the idea of sharing "their" system with others is thoroughly repugnant to most modern computer users. On top of that the network services features which implement multi-user access to Linux (and other Unix systems) and NT are gaping security problems so far as most IS users are concerned. So having a multi-user system is not a benefit to most of us. This is particularly true of the shell access that most people identify as *the* multi-user feature of Unix (as opposed to the file sharing and multiple user profiles, accounts and passwords that passes for "multi-user" under Windows for Workgroups and NT).

So, getting back to ActiveX/DCOM -- I've heard of all sorts of features. I'd like to hear about some benefits. Keep in mind that any feature may be a benefit to someone -- so benefits generally have to be expressed in terms of *who* is the beneficiary.

Allegedly programmers are the beneficiary of all these competing component and object schema. "Use our model and you'll be able to impress your boss with glitzy results in a fraction of the time it would take to do any programming" (that seems to be the siren song to seduce people to any of these).

So, who else benefits?

-- Jim


 Bash String Manipulations

From: Niles Mills [email protected]

Oddly enough -- while it is easy to redirect the standard error of processes under bash -- there doesn't seem to be an easy portable way to explicitly generate messages or redirect output to stderr. The best method I've come up with is to use the /proc/ filesystem (process table) like so:

function error { echo "$*" > /proc/self/fd/2 ; }

Hmmmm...how about good old

>&2
?
$ cat example
#!/bin/bash
echo normal
echo error >&2
$ ./example
normal
error
$ ./example > file
error
$ cat ./file
normal
$ bash -version
GNU bash, version 1.14.4(1)

Best Regards, Niles Mills

 I guess that works. I don't know why I couldn't come up with that on my own. But my comment worked -- a couple of people piped right up with the answer.
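(For the record, my little function from above rewritten with that redirection would just be:

function error { echo "$*" >&2 ; }

... no /proc needed.)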

 Amigo, that little item dates back to day zero of Unix and works on all known flavors. Best of luck in your ventures.

Niles Mills


 Blinking Underline Cursor

From: Joseph Hartmann [email protected]

I know an IBM compatible PC is "capable" of having a blinking underline cursor, or a blinking block cursor.

My linux system "came" with a blinking underline, which is very difficult to see. But I have not been able (for the past several hours) to make *any* headway about finding out how to change the cursor to a blinking block.

 You got me there. I used to know about five lines of x86 assembly language to call the BIOS routine that sets the size of your cursor. Of course that wouldn't work under Linux since the BIOS is mapped out of existence during the trip into protected mode.

I had a friend who worked with me back at Peter Norton Computing -- he wrote a toy program that provided an animated cursor -- and had several neat animated sequences to show with it -- a "steaming coffee cup," a "running man," and a "spinning galaxy" are the ones I remember.

If you wanted to do some kernel hacking it looks like you'd change the value of the "currcons" structure in one of the /usr/src/linux/drivers/char/ files -- maybe it would be "vga.c"

On the assumption that you are not interested in that approach (I don't blame you) I've copied the author of SVGATextMode (a utility for providing text console mode access to the advanced features of most VGA video cards).

Hopefully Koen doesn't mind the imposition. Perhaps he can help.

I've also copied Eugene Crosser and Andries Brouwer the authors of the 'setfont' and 'mapscrn' programs (which don't seem to do cursors -- but do some cool console VGA stuff). 'setfont' lets you pick your text mode console font.

Finally I've copied Thomas Koenig who maintains the Kernel "WishList" in the hopes that he'll add this as a possible entry to that.

Any hints? Best Regards,

 Joe, As you can see I don't feel stumped very often -- and now that I think about it -- I think this would be a neat feature for the Linux console. This is especially true since the people who are most likely to stay away from X Windows are laptop users -- and those are precisely the people who are most likely to need this feature.

-- Jim


 File Permissions

From: John Gotschall [email protected]

Hi! I was wondering if anyone there knew how I might actually change the file permissions on one of my linux box's DOS partition.

I have Netscape running on one box on our local network, but it can't write to another linux box's MSDOS filesystem, when that filesystem is NFS mounted. It can write to various Linux directories that have proper permissions, but the MSDOS directory won't keep a permissions setting, it keeps it stuck as owned by, read by and execute by root.

 What you're bumping into is two different issues. The first is the default permissions under which a DOS FAT filesystem is mounted (which is "root.root 755" -- that is: owned by user root, group root, rwx for owner, r-x for group and other).

You can change that with options to the mount (8) command. Specifically you want to use something like:

mount -t msdos -o uid=??,gid=??,umask=002

... where you pick suitable values for the UID and GID from your /etc/passwd and /etc/group files (respectively). (Note that the umask masks out permission bits -- a umask of 002 yields 775 permissions on the mounted files.)

The other culprit in this is the default behavior of NFS. For your own protection NFS defaults to using a feature called "root squash" (which is not a part of a vegetable). This prevents someone who has root access to some other system (as allowed by your /etc/exports file) from accessing your files with the same permissions as your own local root account.

If you pick a better set of mount options (and put them in your /etc/fstab in the fourth field) then you won't have to worry about this feature. I DO NOT recommend that you over-ride that setting with the NFS no_root_squash option in the /etc/exports file (see 'man 5 exports' for details). I personally would *never* use that option with any export that was mounted read-write -- not even in my own home between two systems that have no live connection to the net! (I do use the no_root_squash option with the read-only option -- but that's a minor risk in my case).
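Such an fstab entry might look something like this (the device, mount point, UID and GID here are placeholders -- substitute your own):

/dev/hda1   /dosc   msdos   uid=500,gid=100,umask=002   0 0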

 Is there a way to change the MS-DOS permissions somehow?

 Yes. See the mount(8) options for uid=, gid=, and umask=. I think you can also use the umsdos filesystem type and effectively change the permissions on your FAT based filesystem mount points.

This was a source of some confusion for me and I've never really gotten it straight to my satisfaction. Luckily I find that I hardly ever use my DOS partitions any more.


Copyright © 1997, James T. Dennis
Published in Issue 19 of the Linux Gazette July 1997

