Welcome to TiddlyWiki created by Jeremy Ruston; Copyright © 2004-2007 Jeremy Ruston, Copyright © 2007-2011 UnaMesa Association
Originally taken from http://www.perl.com/doc/FAQs/FAQ/oldfaq-html/Q5.15.html
There are three basic ways of running external commands:
{{{
system $cmd;
$output = `$cmd`;
open (PIPE, "cmd |");
}}}
In the first case, both STDOUT and STDERR will go to the same place as the script's versions of these, unless redirected. You can always put them where you want them and then read them back when the system returns. In the second and third cases, you are reading only the STDOUT of your command. If you would like STDOUT and STDERR merged, you can use shell file-descriptor redirection to dup STDERR to STDOUT:
{{{
$output = `$cmd 2>&1`;
open (PIPE, "cmd 2>&1 |");
}}}
Another possibility is to run STDERR into a file and read the file later, as in
{{{
$output = `$cmd 2>some_file`;
open (PIPE, "cmd 2>some_file |");
}}}
Note that you cannot simply open STDERR to be a dup of STDOUT in your perl program and avoid calling the shell to do the redirection. This doesn't work:
{{{
open(STDERR, ">&STDOUT");
$alloutput = `cmd args`; # stderr still escapes
}}}
Here's a way to read from both of them and know which descriptor you got each line from. The trick is to pipe only STDOUT through sed, which then marks each of its lines, and then sends that back into a merged STDOUT/STDERR stream, from which your Perl program then reads a line at a time:
{{{
open (CMD,
"(cmd args | sed 's/^/STDOUT:/') 2>&1 |");
while (<PIPE>) {
if (s/^STDOUT://) {
print "line from stdout: ", $_;
} else {
print "line from stderr: ", $_;
}
}
}}}
Be apprised that you must use Bourne shell redirection syntax in backticks, not csh!
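The tagging trick above can be demonstrated directly in the shell (a minimal sketch; the two {{{echo}}} commands stand in for a real command that writes to both streams):

```shell
# Tag stdout lines through sed, then fold stderr back into the merged stream.
# Lines that arrived via stdout carry the "STDOUT:" prefix; stderr lines do not.
( { echo out; echo err >&2; } | sed 's/^/STDOUT:/' ) 2>&1
```

Only stdout passes through sed, so the prefix reliably distinguishes the two streams even after they are merged.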
<script>
sout = new Image();
sout.src = "letter-S.gif" ;
sover = new Image() ;
sover.src = "silvana.jpg" ;
bout = new Image();
bout.src = "letter-B.gif" ;
bover = new Image() ;
bover.src = "billy.jpg" ;
lout = new Image();
lout.src = "letter-L.gif" ;
lover = new Image() ;
lover.src = "lauren4.jpg" ;
kout = new Image();
kout.src = "letter-K.gif" ;
kover = new Image() ;
kover.src = "katie2.jpg" ;
window.AOn = function (name)
{
document.getElementById(name).src =eval(name + "over.src");
}
window.AOff = function (name)
{
document.getElementById(name).src =eval(name + "out.src");
}
</script>
When I was thinking about what to put here, I had much to consider. I originally registered this domain after my wife's stroke, when it became apparent she wasn't going to be her old self again. My thought was that at some point I would probably have to start my own business, because I wasn't going to have time to put in 10-hour days as I had in the past. I have other priorities now that my family's life has been so completely altered. This site was created so that I could earn a living doing something I enjoy and still be able to spend the kind of time I now need to spend with my family. My kids have in many ways lost their mother; I felt they didn't need to lose their father as well. If you're wondering where the name comes from, it represents the reasons I do this...
<html>
<center>
<img id="s" src="letter-S.gif" title="Silvana" alt="Silvana"
onmouseover="AOn('s')" onmouseout="AOff('s')" width="49" height="70">
<img id="b" src="letter-B.gif" title="Billy" alt="Billy"
onmouseover="AOn('b')" onmouseout="AOff('b')" width="49" height="70">
<img id="l" src="letter-L.gif" title="Lauren" alt="Lauren"
onmouseover="AOn('l')" onmouseout="AOff('l')" width="49" height="70">
<img id="k" src="letter-K.gif" title="Katie" alt="Katie"
onmouseover="AOn('k')" onmouseout="AOff('k')" width="49" height="70"><br>
</html>
In case you're wondering what kind of guy I am to do business with, let me explain it this way. I like people to make an informed decision, just as I would. If I can't do the job, then I'll be the first one to tell you. If I need to get some help to complete all aspects of a job, then I'll do it. If there is someone else you should talk to to get the job done, I will personally give you their name and number if I know them. If you're going to spend your hard-earned money on my services, you will get what you paid for, if not more. A mutually beneficial and profitable relationship is the goal.
# At grub boot screen during boot select the kernel
# Press the 'e' key to edit the entry
# Select the line starting with the word kernel (second line)
# Press the 'e' key to edit kernel entry so that you can append single user mode option
# Append the letter 's' (or word 'single') to the end of the (kernel) line
# Press ENTER key to finish the edit
# Now press the 'b' key to boot the Linux kernel into single user mode
# Enter the root password when prompted after booting into single user mode
If you have created a kickstart config file and want to use it to install your machine, take the boot disc (CD disc 1) and place it in the CD-ROM drive. Alternatively, you could use a USB flash drive. Boot the machine, and at the boot prompt enter
{{{
linux ks=http://server/kickstart/ks.cfg
}}}
This assumes a few things.
* You have a DHCP server to hand out ip addresses. If not then you will be prompted to manually enter one.
* The server name or IP address in the ks option points to a running web server with a kickstart directory containing the config file named above.
The install will pause for input if any required directives are missing from your config file.
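For reference, here is a minimal ks.cfg sketch. Every value below is a hypothetical placeholder (the URL, the password hash, the timezone); consult the kickstart documentation for the full directive list before using it.

```
install
url --url http://server/kickstart/tree
lang en_US.UTF-8
keyboard us
network --bootproto dhcp
rootpw --iscrypted $1$salt$hash
timezone America/Los_Angeles
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
%packages
@base
```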
!!Handling multiple ethernet interfaces
If you have a server with multiple physical ~NICs but only one active, you can use the
{{{
ksdevice=link
}}}
option. That way, regardless of which name (ethX) the kernel assigns your NIC, the installer will use the one that has a link. Alternatively, you can just specify the one to use, for example @@eth1@@.
To figure out in what Run Level you are, run
{{{
/sbin/runlevel
}}}
to show the previous and the current Run Levels.
To change the run level
{{{
/sbin/telinit N
}}}
or
{{{
/sbin/init N
}}}
where N is a new run level. The run levels from /etc/inittab are:
|0|halt (Do NOT set initdefault to this)|
|1|Single user mode|
|2|Multiuser, without NFS (The same as 3, if you do not have networking)|
|3|Full multiuser mode|
|4|unused|
|5|X11|
|6|reboot (Do NOT set initdefault to this)|
To change the default run level for every boot, edit /etc/inittab and modify this line, changing the current runlevel (3) to the runlevel you desire:
{{{
id:3:initdefault:
}}}
To halt the system you can use
{{{
/sbin/init 0
}}}
which is the same as
{{{
shutdown -h now
}}}
To reboot the system
{{{
/sbin/init 6
}}}
which is the same as
{{{
shutdown -r now
}}}
My wife frequently, inadvertently, changes the language and can't figure out how to fix it. This Apple support page http://support.apple.com/kb/HT1824 details how to do it.
To check for disk errors and drive status use {{{iostat -En}}}, which produces something like the output below. There is one bad disk listed; can you spot it?
{{{
c0t0d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: IBM      Product: DDRS39130SUN9.0G Revision: S98E Serial No: 478804
Size: 9.06GB <9055065600 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c0t6d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: TOSHIBA  Product: DVD-ROM SD-M1401 Revision: 1007 Serial No: 06/22/00
Size: 18446744073.71GB <-1 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c0t8d0           Soft Errors: 1 Hard Errors: 37 Transport Errors: 64
Vendor: SEAGATE  Product: ST336607LC Revision: 0006 Serial No: 3JA0NXZH00007327
Size: 36.70GB <36702535680 bytes>
Media Error: 10 Device Not Ready: 0 No Device: 5 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
c0t9d0           Soft Errors: 0 Hard Errors: 0 Transport Errors: 0
Vendor: SEAGATE  Product: ST336704LSUN36G Revision: 032C Serial No: 3CD0JCFW00007110
Size: 36.42GB <36418595328 bytes>
Media Error: 0 Device Not Ready: 0 No Device: 0 Recoverable: 0
Illegal Request: 0 Predictive Failure Analysis: 0
}}}
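To spot the bad disk automatically, you can filter the error counters with awk (a sketch; the field positions assume the standard {{{iostat -En}}} device-summary line, and {{{flag_bad_disks}}} is just a name chosen for illustration):

```shell
# Print the name of any device whose soft, hard, or transport error
# count is non-zero. Reads iostat -En output on stdin.
flag_bad_disks() {
  awk '/Soft Errors:/ { if ($4 + $7 + $10 > 0) print $1 }'
}

# Usage: iostat -En | flag_bad_disks
```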
This is a one-liner awk script that detects that yum has downloaded and installed an updated Linux kernel (~CentOS), so it can let you know you need to reboot to use it. I added line breaks for readability, but you can remove them.
{{{
awk -v r=`uname -r` '
BEGIN { i=0; cnt=0; msg="No kernel update needed"; }
{
if ($1=="default=")
{
sub(/default=/,"",$1);
i=$1
};
if ($1=="title")
{
if (cnt==i)
{
sub(/\(/,"",$3);
sub(/\)/,"",$3);
if ($3 != r) msg="Need to reboot into updated kernel "$3;
};
cnt++
}
}
END { print msg; }' /etc/grub.conf
}}}
This can be further modified to call an external command to send an email or write to syslog. In the END section, change the print statement to a system command:
{{{
END { system ("logger -p daemon.warn -i -t kernel-check "msg); }
}}}
Go to http://www.clamav.net/ and get the latest software. I am using 0.96.5 for these instructions. You need to have the following packages installed:
{{{
zlib-devel
bzip2-devel
gmp-devel
valgrind (CentOS 5 and above)
sendmail-devel
ncurses-devel
gcc4 (CentOS 4)
g++4 (CentOS 4)
}}}
When you download the software you can check using gpg. Refer to the upgrade instructions http://www.clamav.net/lang/en/support/faq/faq-upgrade/ for details. You'll need to import the gpg key the first time, then get the software and verify the sig using gpg as instructed.
{{{
gpg --import tkojm.gpg
}}}
I have this script to fetch the release and verify its signature:
{{{
#!/bin/bash
VER=`cat version`
SRCURL="http://downloads.sourceforge.net/clamav/clamav-$VER.tar.gz"
SIGURL="http://downloads.sourceforge.net/clamav/clamav-$VER.tar.gz.sig"
if [ -f "clamav-$VER.tar.gz" ]; then
rm clamav-$VER.tar.gz*
fi
wget -nc $SRCURL
wget -nc $SIGURL
gpg --verify clamav-$VER.tar.gz.sig
}}}
Also get and compile the check software from http://check.sourceforge.net/
{{{
./configure CC=gcc4 CXX=g++4
make
make check
make install
}}}
Create clamav user and group accounts:
{{{
groupadd clamav
useradd -c "ClamAV" -m -d /var/run/clamav -g clamav -s /bin/bash clamav
}}}
So you end up with this in /etc/passwd:
{{{
clamav:x:40:40:ClamAV:/var/run/clamav:/bin/bash
}}}
Unpack the compressed tar file and build the software:
{{{
./configure --disable-zlib-vcheck --enable-milter --enable-check
make
make check
make install
}}}
On ~CentOS 4 you also need these configure options
{{{
--disable-llvm CC=gcc4 CXX=g++4
}}}
When running {{{make check}}}, use @@VG=1@@ if you have the valgrind package installed, version 3.5 or later (~CentOS 5 and above).
!!Configuring and Using
Now let's configure it. Edit the clamd.conf file under /usr/local/etc and make the following changes:
{{{
#Example
LogSyslog yes
LogFacility LOG_MAIL
LocalSocket /var/run/clamav/clamd.socket
User clamav
}}}
Edit freshclam.conf in the same directory and make these changes:
{{{
#Example
LogSyslog yes
LogFacility LOG_MAIL
DatabaseMirror db.us.clamav.net
}}}
Add a crontab entry for the clamav user to update the virus signatures regularly. Using "crontab -u clamav -e" add the following line:
{{{
13 * * * * /usr/local/bin/freshclam --quiet
}}}
Copy the contrib init scripts, which you should find in contrib/init/~Redhat, to set up clamd and clamav-milter, and install them into the system. For the clamav-milter init script you will need to create /etc/sysconfig/clamav-milter and put in the necessary flags, including the socket information for it to connect to.
{{{
cp contrib/init/Redhat/* /etc/init.d/
chkconfig --add clamd
chkconfig clamd on
service clamd start
chkconfig --add clamav-milter
chkconfig clamav-milter on
}}}
Edit /etc/sysconfig/clamav-milter and add the following:
{{{
CLAMAV_FLAGS="-Hlnq /var/run/clamav/clmilter.sock"
}}}
Then start the service.
{{{
service clamav-milter start
}}}
Now configure sendmail to use your milter by editing your sendmail.mc file and adding the following:
{{{
INPUT_MAIL_FILTER(`clamav', `S=local:/var/run/clamav/clmilter.sock, F=, T=S:4m;R:4m;C:30s;E:10m')dnl
define(`confINPUT_MAIL_FILTERS', `clamav')
}}}
Now rebuild the cf file and restart sendmail:
{{{
make -C /etc/mail
service sendmail restart
}}}
The bash man page contains some good documentation on line editing; you can jump to the relevant section by executing man bash and then typing /^READ<enter>, or just page down until you come to it.
Here is a selection of handy shortcuts to get started:
|Shortcut|What it does|
|~Ctrl-A|Move to beginning of line|
|~Ctrl-E|Move to end of line|
|~Alt-F|Move forward one word|
|~Alt-B|Move backward one word|
|~Ctrl-K|Cut to end of line|
|~Alt-Backspace| Cut backwards current word|
|~Ctrl-Y|Paste from clipboard|
|Alt-.|Paste last argument from previous command|
|~Ctrl-R|Reverse search through history|
Well, there is lots to tell, but the gist of it is that an electronics elective in high school set me on the path to graduating from UCI as an electrical engineer. I was hired by Western Digital out of school to do VLSI chip design for network controllers. Eventually I went deep into Design Automation tools and integration, writing C code to do data transformations and validation; all the major toolsets were used. From there it was on to Standard Microsystems, and later to designing hardware for a VAR and lots of network consulting. Eventually I moved on to UCI to work in the Office of Information Technology on large-scale campus services and systems in Linux, including the transition from Solaris to Linux. The field is constantly evolving and changing, so continuous education is required. I love a good problem to solve and look forward to helping you solve yours.
The “cpuspeed” service, which throttles the CPU (running it at reduced speed to save power), can be turned off.
{{{
service cpuspeed status
Frequency scaling enabled using ondemand governor
}}}
View the effect on the CPU (output trimmed down quite a bit here)
{{{
cat /proc/cpuinfo
model name : Intel(R) Xeon(R) CPU X5570 @ 2.93GHz
stepping : 5
cpu MHz : 1600.000
}}}
and to disable it
{{{
service cpuspeed stop
Disabling ondemand cpu frequency scaling: [ OK ]
chkconfig cpuspeed off
}}}
To stay logged in for long processes to run you need to stop tcsh from automatically logging you out. Do this by setting the variable autologout to zero.
{{{
set autologout=0
}}}
This can be done in your .cshrc file or .tcshrc file if you prefer.
If your IMAP server supports the quota RFC (RFC 2087), or at least the part needed to display quotas, then you can make a couple of adjustments in your Thunderbird settings to display your quota usage. By default it doesn't display anything unless you cross the warning threshold. I'd kinda like to see it all the time. Here's how.
There are three keys that affect the display of quotas in Thunderbird:
*mail.quota.mainwindow_threshold.show
**in percent, when the quota meter starts showing up at all
**(you should set this key to 0 if you want to always display it).
*mail.quota.mainwindow_threshold.warning
**in percent, when it gets yellow.
*mail.quota.mainwindow_threshold.critical
**in percent, when it gets red.
To change these keys, open Thunderbird's Tools, Options, go to the “Advanced” section, then the “General” tab, then click the “Config Editor…” button. In the popup window, enter the key name into the search field, then double-click on it to change its value.
Here are some useful (to me) documents I have collected for reference over time.
[[TiddlyWiki Cheat Sheet|docs/tiddlywiki_cheatsheet.pdf]]
[[GAWK|docs/gawk.pdf]]
[[Bash Beginners Guide|docs/Bash-Beginners-Guide.pdf]]
[[Procmail How To|docs/Procmail How-To Page.pdf]]
You can set the root password in your kickstart config file by using the rootpw option. To avoid putting the plain-text password in the file, you can do the following to generate the encrypted form and give it to rootpw with the --iscrypted flag.
Using a current version of perl
{{{
perl -e 'print crypt("password","salt")."\n";'
}}}
replace 'password' with the word you want to encrypt, and 'salt' with a random salt string.
For DES style passwords, salt should be any two characters
{{{
perl -e 'print crypt("secret","Hi")."\n";'
}}}
For ~MD5 passwords, you use the format '$1$salt' where the '$1$' is literal, and the salt is up to 8 characters
{{{
perl -e 'print crypt("secret","\$1\$guesswho")."\n";'
}}}
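If you'd rather not use perl, openssl can produce the same ~MD5 crypt string (a sketch, assuming a reasonably recent openssl; "guesswho" is just an example salt):

```shell
# -1 selects the MD5-based crypt format ($1$salt$hash)
openssl passwd -1 -salt guesswho secret
```

The output should be identical to what the perl crypt() call above produces for the same salt and password.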
There are a handful of environment variables you can set that have an effect on your ability to compile software.
For compiling C code
{{{
CC, CFLAGS
}}}
For compiling C++ code
{{{
CXX, CPPFLAGS
}}}
For the linking stage
{{{
LDFLAGS
}}}
For compiling any code
{{{
TMPDIR
}}}
After the code is compiled
{{{
LD_LIBRARY_PATH
}}}
You can do this
{{{
export CC CFLAGS CXX CPPFLAGS TMPDIR LDFLAGS LD_LIBRARY_PATH
}}}
You don't have to export these as environment variables for GCC to use them. You could simply add them to the configure command line as shown below:
{{{
./configure CC=gcc CFLAGS=-O3 LIBS=-lposix
}}}
Let's take a look at what each of these do:
CC=gcc, tells configure that your C Compiler (CC) is gcc.
CFLAGS=-O3, tells configure that you want the compiler to use the following optimization flag. By using optimization flags you can adjust how the code is compiled. For instance, -O3 tells the compiler to optimize the code for speed. You should check the GCC manual for a listing of all of the different flags that you can set to see if there are any that you think you should use.
If the compiler cannot find a filename.h header, you can add -I directories to CFLAGS to tell it where else to look for the files it needs to compile the program. An example of this would be:
{{{
CFLAGS=-I/usr/local/include
}}}
If the program you're trying to compile needs to access the network (on Solaris), be sure to include the socket and nsl libraries. In the makefile these should show up as -lsocket -lnsl.
CXX=g++, tells configure that your C++ compiler is named g++, the GNU equivalent of c++.
CPPFLAGS=-I/usr/local/include, same as CFLAGS listed above, except for use with C++.
LDFLAGS="-L/usr/sfw/lib -R/usr/sfw/lib", tells the linker where to look for libraries at link time (-L) and records a runtime library search path in the binary (-R, on Solaris).
Linking is the final stage of compiling. One of the things you need to be concerned with is where the program's libraries go. As with binaries, the system does not know where libraries are unless you specifically tell it, so when you go to run a program you will sometimes run into a situation where you need to set the ~LD_LIBRARY_PATH environment variable.
On Solaris 8 and above you can get around this by using the crle command instead, or you can fix the issue right here so you don't have to do either. The ~LD_LIBRARY_PATH variable is to libraries what PATH is to executables. When you run a dynamically linked program, it will search the ~LD_LIBRARY_PATH directories to find the libraries it needs to run; if the program cannot find all of its libraries, it will not run. Just like PATH, ~LD_LIBRARY_PATH is a colon-delimited list of directories.
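For example, to make a program find shared libraries in /usr/sfw/lib at run time (a sketch; the directory is taken from the LDFLAGS example above and may differ on your system):

```shell
# Prepend the library directory, keeping any existing search path.
LD_LIBRARY_PATH=/usr/sfw/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
```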
Originally found here http://www.ilkda.com/compile/Environment_Variables.htm
To extend a logical volume using LVM, use lvextend. A physical volume name can optionally be given to allocate the new space on a particular physical volume. This is handy if multiple physical volumes comprise your logical volume and it matters to you where the blocks are allocated.
{{{
lvextend -L+1GB /dev/vg00/lvol00 [pvname]
}}}
To resize the filesystem on that logical volume you will need to umount it, run fsck on it and then you can resize it and finally mount it.
{{{
umount /freespace
e2fsck -f /dev/vg00/lvol00
resize2fs /dev/vg00/lvol00
mount /freespace
}}}
Use df to verify if you wish.
In order to force a Linux system to perform a full file system check upon a reboot you merely need to create the /forcefsck file. This will force an fsck to be run the next time the system reboots. Do the following
{{{
su -
touch /forcefsck
reboot
}}}
[[Leo's Icon Archive|http://www.iconarchive.com/]]
So I want to secure my website or other services with a certificate. This can be done using an outside certificate service or with self-signed certificates. The first step in the process is to generate your private key; it is what proves you are you, so keep it stored away from prying eyes. I'm going to generate a 2048-bit RSA key using strong AES-256 encryption. Why not make it as good as it can be, after all?
{{{
openssl genrsa -aes256 -out my.key 2048
}}}
During the key generation process you'll be asked for a pass-phrase to protect the key against prying eyes. Don't forget it because you'll be out of luck later if you lose it.
Once the private key is generated a Certificate Signing Request (CSR) can be generated. The CSR is then used in one of two ways. Ideally, the CSR will be sent to a Certificate Authority, such as Thawte or Verisign or ~StartSSL who will verify the identity of the requester and issue a signed certificate. The second option is to self-sign the CSR, which will be demonstrated a little later.
During the generation of the CSR, you will be prompted for several pieces of information. These are the X.509 attributes of the certificate. One of the prompts will be for your "Common Name". It is important that this field be filled in with the fully qualified domain name (FQDN) of the server to be protected. The command to generate the CSR is as follows.
{{{
openssl req -new -key my.key -out my.csr
}}}
One unfortunate side effect of the pass-phrased private key is that the service being protected will ask for the pass-phrase each time it is started. Since there is nobody around to type in the pass-phrase, think reboot or crash, we need a copy that is unencrypted so the service can start automatically. It is critical that this file only be readable by the root user! If your system is ever compromised and a third party obtains your unencrypted private key, the corresponding certificate will need to be revoked. Use the following command to remove the pass-phrase from the key:
{{{
openssl rsa -in my.key -out my.key.pem
}}}
At this point you will either hand the CSR file over to a certificate authority (CA) and they will give you a certificate file or you will need to generate a self-signed certificate. A self signed certificate is usually used for testing or because you don't want to pay for one. A self signed certificate will generate an error in the client browser / application saying that the signing certificate authority is unknown and not trusted.
To generate a temporary certificate which is good for 365 days, issue the following command:
{{{
openssl x509 -req -days 365 -in my.csr -signkey my.key -out my.crt
}}}
Now you can install your unencrypted key and certificate for use by your application. On modern Linux versions you would put them here:
{{{
cp my.key.pem /etc/pki/tls/private
chmod 600 /etc/pki/tls/private/my.key.pem
cp my.crt /etc/pki/tls/certs
chmod 644 /etc/pki/tls/certs/my.crt
}}}
Make sure the permissions on the files are appropriate otherwise you could disclose your unencrypted private key. That would be bad.
FYI, if you want to verify that your private key matches your certificate, run these commands on your key and crt files and see if the outputs match.
{{{
openssl x509 -noout -modulus -in my.crt | openssl md5
openssl rsa -noout -modulus -in my.key | openssl md5
}}}
If both values match, the private key is the right key for your certificate. One other important thing to note: if your certificate provider requires an intermediate certificate, you need to add it to the end of your crt file or the certificate won't validate. Sometimes the application will have a configuration line for intermediate certs and you can just point to a separate file; more often than not, however, you have to append it to your certificate. Remember that there is a specific order to follow:
* certificate
* intermediate CA certificate
* other CA certificates
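Building the chained file is plain concatenation in the order above (a sketch; "intermediate.crt" is a hypothetical filename for the certificate your CA gave you):

```shell
# Order matters: your certificate first, then the intermediate,
# then any further CA certificates.
cat my.crt intermediate.crt > my-chained.crt
```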
How can I be sure, the first time I connect, that I am connecting to the right ssh server and not a "man-in-the-middle" server? I would like to know if there is a way to obtain the fingerprint from the server ahead of time, so that I can really be sure when I get this type of message:
{{{
The authenticity of host 'nnn.nnn.nnn.nnn' can't be established.
RSA key fingerprint is xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx.
Are you sure you want to continue connecting (yes/no)?
}}}
The way to generate the fingerprint from a given key is to run:
{{{ssh-keygen -l -f server-public-key.pub}}}
Of course you will need to get the public key to generate the fingerprint.
Run the tw_cli application and from the prompt enter:
{{{
//antarctica> info
Ctl Model Ports Drives Units NotOpt RRate VRate BBU
------------------------------------------------------------------------
c2 9500S-8MI 8 8 1 0 1 1 -
//antarctica>
}}}
to see the list of controllers, then to get specific info on your controller:
{{{
//antarctica> info c2
Unit UnitType Status %Cmpl Stripe Size(GB) Cache AVerify IgnECC
------------------------------------------------------------------------------
u0 RAID-5 OK - 64K 1396.92 ON OFF OFF
Port Status Unit Size Blocks Serial
---------------------------------------------------------------
p0 OK u0 233.76 GB 490234752 Y651QBLE
p1 OK u0 233.76 GB 490234752 Y651HRCE
p2 OK u0 233.76 GB 490234752 Y651QBHE
p3 OK u0 233.76 GB 490234752 Y653FFZE
p4 OK u0 233.76 GB 490234752 Y653G0VE
p5 OK u0 233.76 GB 490234752 Y653G0PE
p6 OK - 233.76 GB 490234752 Y65XBYEE
p7 OK u0 233.76 GB 490234752 Y63PLESE
//antarctica>
}}}
Enter "quit" to exit.
To get started with this blank TiddlyWiki, you'll need to modify the following tiddlers:
* SiteTitle & SiteSubtitle: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* MainMenu: The menu (usually on the left)
* DefaultTiddlers: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
An SSL certificate contains a wide range of information: issuer, valid dates, subject, and some hardcore crypto stuff. The x509 subcommand is the entry point for retrieving this information. The examples below all assume that the certificate you want to examine is stored in a file named cert.pem.
Using the -text option will give you the full breadth of information.
{{{
openssl x509 -text -in cert.pem
}}}
Other options will provide more targeted sets of data.
Who issued the cert?
{{{
openssl x509 -noout -in cert.pem -issuer
}}}
To whom was it issued?
{{{
openssl x509 -noout -in cert.pem -subject
}}}
Between which dates is it valid?
{{{
openssl x509 -noout -in cert.pem -dates
}}}
What is its ~MD5 fingerprint?
{{{
openssl x509 -noout -in cert.pem -fingerprint
}}}
But I want to extract info from my CSR file because I don't remember what I filled in last time. What do I do?
{{{
openssl req -text -in cert.csr
}}}
* Plug in the USB device.
* List the scsi device partitions attached and find your USB device.
{{{
fdisk -l
}}}
Something similar to /dev/sdb1 should be listed.
* Now create a directory to mount the USB device partition to.
{{{
mkdir /mnt/usb
}}}
* Now mount the USB device.
{{{
mount /dev/sdb1 /mnt/usb
}}}
* Browse the /mnt/usb directory to view the contents of the USB device.
If you are playing with SCSI devices (like Fibre Channel, SAS, etc.) you sometimes need to rescan the SCSI bus to add devices or to tell the kernel a device is gone. This is the way to do it in ~CentOS versions that have a 2.6 kernel, which means ~CentOS 5 and ~CentOS 4 (starting from update 3).
1. Find the host number for the HBA:
{{{
ls /sys/class/fc_host/
}}}
(You'll have something like host1 or host2, I'll refer to them as host$NUMBER from now on)
2. Ask the HBA to issue a LIP signal to rescan the FC bus:
{{{
echo 1 >/sys/class/fc_host/host$NUMBER/issue_lip
}}}
3. Wait around 15 seconds for the LIP command to take effect
4. Ask Linux to rescan the SCSI devices on that HBA:
{{{
echo - - - >/sys/class/scsi_host/host$NUMBER/scan
}}}
The wildcards "- - -" mean to look at every channel, every target, every lun.
That's it. You can look for log messages in "dmesg" to see if it's working, and you can look at /proc/scsi/scsi to see if the devices are there. In ~CentOS 5 there is also the "lsscsi" command that will show you the known devices plus the device entries that point to those devices (very useful).
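The steps above can be wrapped in two small helper functions that loop over every host (a sketch; writes to sysfs need root, the function names are made up, and the optional first argument exists only so the sysfs root can be overridden):

```shell
# Step 2 for every FC HBA: ask each to issue a LIP.
issue_lip_all() {
  for f in "${1:-/sys}"/class/fc_host/host*/issue_lip; do
    [ -e "$f" ] && echo 1 > "$f"
  done
}

# Step 4 for every SCSI host: rescan all channels, targets, and luns.
rescan_all() {
  for f in "${1:-/sys}"/class/scsi_host/host*/scan; do
    [ -e "$f" ] && echo "- - -" > "$f"
  done
}

# Usage (as root): issue_lip_all; sleep 15; rescan_all
```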
The default desktop for the VNC Server is "TWM", though most people are used to KDE or Gnome instead. Here is how to change it:
In your home directory edit the file {{{.vnc/xstartup}}}
Find the line {{{twm &}}}
For KDE, replace with {{{exec startkde}}}
For Gnome, replace with {{{exec gnome-session}}}
To use whatever you have selected as your default X desktop replace with {{{exec startx}}}
Kill any existing VNC servers with {{{vncserver -kill :xxx}}} where xxx is the display number.
Start a new server.
First, the program must be compiled with debugging information, otherwise the information that gdb can display will be fairly cryptic. Second, the program must have crashed and left a core file; the shell should tell you it did so with the message "core dumped".
The command line to start gdb to look at the core file is:
{{{
gdb program core
}}}
where "program" is the name of the program you're working on. Gdb will then load the program's debugging information and examine the core file to determine the cause of the crash.
The last line that gdb will print before the "(gdb)" prompt will be something like:
{{{
#0 0xef607e54 in main() at line 344 in main.c
}}}
This corresponds to the last statement that was attempted which likely caused the crash.
You can find out which function called the current function by using the "up" command, which will print out a similar line. The "down" command does the opposite. Finally, to view the entire stack, use the "backtrace" command or its abbreviation "bt".
/***
|Name|InlineJavascriptPlugin|
|Source|http://www.TiddlyTools.com/#InlineJavascriptPlugin|
|Documentation|http://www.TiddlyTools.com/#InlineJavascriptPluginInfo|
|Version|1.9.5|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|Insert Javascript executable code directly into your tiddler content.|
''Call directly into TW core utility routines, define new functions, calculate values, add dynamically-generated TiddlyWiki-formatted output'' into tiddler content, or perform any other programmatic actions each time the tiddler is rendered.
!!!!!Documentation
>see [[InlineJavascriptPluginInfo]]
!!!!!Revisions
<<<
2009.04.11 [1.9.5] pass current tiddler object into wrapper code so it can be referenced from within 'onclick' scripts
2009.02.26 [1.9.4] in $(), handle leading '#' on ID for compatibility with JQuery syntax
|please see [[InlineJavascriptPluginInfo]] for additional revision details|
2005.11.08 [1.0.0] initial release
<<<
!!!!!Code
***/
//{{{
version.extensions.InlineJavascriptPlugin= {major: 1, minor: 9, revision: 5, date: new Date(2009,4,11)};
config.formatters.push( {
name: "inlineJavascript",
match: "\\<script",
lookahead: "\\<script(?: src=\\\"((?:.|\\n)*?)\\\")?(?: label=\\\"((?:.|\\n)*?)\\\")?(?: title=\\\"((?:.|\\n)*?)\\\")?(?: key=\\\"((?:.|\\n)*?)\\\")?( show)?\\>((?:.|\\n)*?)\\</script\\>",
handler: function(w) {
var lookaheadRegExp = new RegExp(this.lookahead,"mg");
lookaheadRegExp.lastIndex = w.matchStart;
var lookaheadMatch = lookaheadRegExp.exec(w.source)
if(lookaheadMatch && lookaheadMatch.index == w.matchStart) {
var src=lookaheadMatch[1];
var label=lookaheadMatch[2];
var tip=lookaheadMatch[3];
var key=lookaheadMatch[4];
var show=lookaheadMatch[5];
var code=lookaheadMatch[6];
if (src) { // external script library
var script = document.createElement("script"); script.src = src;
document.body.appendChild(script); document.body.removeChild(script);
}
if (code) { // inline code
if (show) // display source in tiddler
wikify("{{{\n"+lookaheadMatch[0]+"\n}}}\n",w.output);
if (label) { // create 'onclick' command link
var link=createTiddlyElement(w.output,"a",null,"tiddlyLinkExisting",wikifyPlainText(label));
var fixup=code.replace(/document.write\s*\(/gi,'place.bufferedHTML+=(');
link.code="function _out(place,tiddler){"+fixup+"\n};_out(this,this.tiddler);"
link.tiddler=w.tiddler;
link.onclick=function(){
this.bufferedHTML="";
try{ var r=eval(this.code);
if(this.bufferedHTML.length || (typeof(r)==="string")&&r.length)
var s=this.parentNode.insertBefore(document.createElement("span"),this.nextSibling);
if(this.bufferedHTML.length)
s.innerHTML=this.bufferedHTML;
if((typeof(r)==="string")&&r.length) {
wikify(r,s,null,this.tiddler);
return false;
} else return r!==undefined?r:false;
} catch(e){alert(e.description||e.toString());return false;}
};
link.setAttribute("title",tip||"");
var URIcode='javascript:void(eval(decodeURIComponent(%22(function(){try{';
URIcode+=encodeURIComponent(encodeURIComponent(code.replace(/\n/g,' ')));
URIcode+='}catch(e){alert(e.description||e.toString())}})()%22)))';
link.setAttribute("href",URIcode);
link.style.cursor="pointer";
if (key) link.accessKey=key.substr(0,1); // single character only
}
else { // run script immediately
var fixup=code.replace(/document.write\s*\(/gi,'place.innerHTML+=(');
var c="function _out(place,tiddler){"+fixup+"\n};_out(w.output,w.tiddler);";
try { var out=eval(c); }
catch(e) { out=e.description?e.description:e.toString(); }
if (out && out.length) wikify(out,w.output,w.highlightRegExp,w.tiddler);
}
}
w.nextMatch = lookaheadMatch.index + lookaheadMatch[0].length;
}
}
} )
//}}}
// // Backward-compatibility for TW2.1.x and earlier
//{{{
if (typeof(wikifyPlainText)=="undefined") window.wikifyPlainText=function(text,limit,tiddler) {
if(limit > 0) text = text.substr(0,limit);
var wikifier = new Wikifier(text,formatter,null,tiddler);
return wikifier.wikifyPlain();
}
//}}}
// // GLOBAL FUNCTION: $(...) -- 'shorthand' convenience syntax for document.getElementById()
//{{{
if (typeof($)=='undefined') { function $(id) { return document.getElementById(id.replace(/^#/,'')); } }
//}}}
Kickstart config file installation options
* install
Tells the system to install a fresh system rather than upgrade an existing system. This is the default mode. For installation, you must specify the method (source) you will be installing from. The install command and the installation method command must be on separate lines.
* cdrom
Install from the first ~CD-ROM drive on the system.
* harddrive
Install from a ~RedHat installation tree (directory) on a local drive, which must be either vfat or ext2.
{{{
harddrive --partition=hdb2 --dir=/tmp/install-tree
}}}
* nfs
Install from the NFS server and directory specified.
{{{
nfs --server=server.example.com --dir=/tmp/install-tree
}}}
* url
Install from an installation tree on a remote server via FTP or HTTP. For example:
{{{
url --url http://server/dir
}}}
or:
{{{
url --url ftp://username:password@server/dir
}}}
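Putting the pieces together, the top of a kickstart file might look like this — a sketch with placeholder server/directory names, showing the install command and the method command on separate lines as required:
{{{
install
url --url http://server/dir
}}}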
Here's a quick step-by-step reminder for getting LVM volumes online.
* To review your physical volumes: {{{pvdisplay}}}
* To create a new physical volume: {{{pvcreate /dev/sdc}}}
* To review your volume groups: {{{vgdisplay}}}
* To create a new volume group: {{{vgcreate VolGroupData /dev/sdb}}}
* To add a new physical volume to your volume group: {{{vgextend VolGroupData /dev/sdc}}}
* To review your logical volumes: {{{lvdisplay}}}
* To create a new logical volume: {{{lvcreate -L 600G -n Cad VolGroupData}}}
* To activate a logical volume: {{{lvchange -ay /dev/mapper/VolGroupData-Cad}}}
* To create it on a specific physical volume: {{{lvcreate -L 600G -n Cad VolGroupData /dev/sdc}}}
* To build a filesystem on the logical volume: {{{mke2fs -j /dev/VolGroupData/Cad}}}
* To label the filesystem: {{{e2label /dev/VolGroupData/Cad cad}}}
* To deactivate a logical volume: {{{lvchange -an /dev/mapper/VolGroupData-Cad}}}
* To remove a logical volume: {{{lvremove /dev/mapper/VolGroupData-Cad}}}
Whenever you create multiple partitions or a non-FAT(32) filesystem in a partition on the stick, the inquiry of the stick times out. The default Linux timeout is 5 seconds, and the stick takes 14.5 seconds. So if you set the timeout to 15 seconds or more and reinsert the stick, it should be mountable and readable. Just remember to add this to your boot scripts, or repeat it every time you reboot the kernel.
The inquiry timeout can be set to 15 sec with the following command line:
{{{
echo 15 >/sys/module/scsi_mod/parameters/inq_timeout
}}}
Or, if you prefer, add this to the kernel line in your grub config:
{{{
scsi_mod.inq_timeout=15
}}}
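To make the setting persist across reboots without touching grub, one option is to append the echo line to your rc.local (a sketch; /etc/rc.d/rc.local is the usual Red Hat location, other distros may differ):
{{{
# tail of /etc/rc.d/rc.local
echo 15 > /sys/module/scsi_mod/parameters/inq_timeout
}}}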
To save your printer configuration, type this command as root:
{{{
/usr/sbin/system-config-printer-tui --Xexport > settings.xml
}}}
Your configuration is saved to the file settings.xml, which can then be used to restore the printer settings. To restore the configuration, type this command as root:
{{{
/usr/sbin/system-config-printer-tui --Ximport < settings.xml
}}}
After importing the configuration file, you must restart the printer daemon. Issue the following command as root:
{{{
/sbin/service cups restart
}}}
<html>
<b>
<a href="javascript:void(0)" onclick="story.closeAllTiddlers();story.displayTiddlers(null,['Today'])">Today</a><br>
<a href="javascript:void(0)" onclick="story.closeAllTiddlers();story.displayTiddlers(null,['About SBLK'])">About SBLK</a><br>
<a href="javascript:void(0)" onclick="story.closeAllTiddlers();story.displayTiddlers(null,['DavidSeverance'])">Me</a><br>
<a href="javascript:void(0)" onclick="story.closeAllTiddlers();story.displayTiddlers(null,['Services'])">Services For Hire</a><br>
<a href="javascript:void(0)" onclick="story.closeAllTiddlers();story.displayTiddlers(null,['Documents'])">Documents</a><br>
<a href="javascript:void(0)" onclick="story.closeAllTiddlers();story.displayTiddlers(null,['Useful Links'])">Useful Links</a><br>
<a href="javascript:void(0)" onclick="story.closeAllTiddlers();story.displayTiddlers(null,['Fun Stuff'])">Fun Stuff</a><br>
</b>
</html>
To make an ISO from your CD/DVD, place the media in your drive but do not mount it. If it automounts, unmount it.
{{{
dd if=/dev/dvd of=dvd.iso # for dvd
dd if=/dev/cdrom of=cd.iso # for cdrom
dd if=/dev/scd0 of=cd.iso # if cdrom is scsi
}}}
To make an ISO from files on your hard drive, create a directory which holds the files you want. Then use the mkisofs command.
{{{
mkisofs -o /tmp/cd.iso /tmp/directory/
}}}
This results in a file called cd.iso in folder /tmp which contains all the files and directories in /tmp/directory/.
For more info, see the man pages for mkisofs, losetup, and dd, or see the ~CD-Writing-HOWTO at http://www.tldp.org.
If you want to create ISO images from a CD and you're using Windows, Cygwin has a dd command that will work. Since dd is not specific to cdroms, it will also create disk images of floppies, hard drives, zip drives, etc.
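Since dd is just a byte-for-byte copy, you can convince yourself of its behavior with an ordinary file before pointing it at a device — a toy sketch with made-up filenames:
{{{
cd "$(mktemp -d)"                         # scratch directory
printf 'hello media' > source.img         # stand-in for a real device
dd if=source.img of=copy.iso 2>/dev/null  # same invocation as above, minus the device
cmp -s source.img copy.iso && echo identical
}}}
cmp exiting quietly confirms the copy is exact, which is all dd guarantees.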
There is a nice how to article http://wiki.centos.org/HowTos/MigrationGuide which shows how to do it for ~RHEL5 to ~CentOS5
I did this to go from an expired ~RHEL4 machine to ~CentOS4. This was with a 4.8 64 bit install, adjust accordingly for your host.
First I downloaded some ~CentOS rpms I needed.
{{{
wget -nc http://mirrors.kernel.org/centos/4.8/os/x86_64/CentOS/RPMS/centos-release-4-8.x86_64.rpm
wget -nc http://mirrors.kernel.org/centos/4.8/os/x86_64/CentOS/RPMS/python-elementtree-1.2.6-5.el4.centos.x86_64.rpm
wget -nc http://mirrors.kernel.org/centos/4.8/os/x86_64/CentOS/RPMS/python-sqlite-1.1.7-1.2.1.x86_64.rpm
wget -nc http://mirrors.kernel.org/centos/4.8/os/x86_64/CentOS/RPMS/python-urlgrabber-2.9.8-2.noarch.rpm
wget -nc http://mirrors.kernel.org/centos/4.8/os/x86_64/CentOS/RPMS/sqlite-3.3.6-2.x86_64.rpm
wget -nc http://mirrors.kernel.org/centos/4.8/os/x86_64/CentOS/RPMS/yum-2.4.3-4.el4.centos.noarch.rpm
wget -nc http://mirrors.kernel.org/centos/4.8/os/x86_64/CentOS/RPMS/yum-metadata-parser-1.0-8.el4.centos.x86_64.rpm
}}}
Remove the RHEL stuff and turn off the RHN update service daemon.
{{{
rpm -e --nodeps redhat-release
rpm -e rpmdb-redhat
service rhnsd stop
chkconfig rhnsd off
chkconfig --list rhnsd
}}}
Get and install your ~CentOS GPG key.
{{{
wget -nc http://mirror.centos.org/centos/RPM-GPG-KEY-centos4
rpm --import RPM-GPG-KEY-centos4
}}}
Install the rpms you downloaded earlier.
{{{
rpm -Uvh *.rpm
}}}
Fire up yum to update your machine.
{{{
yum clean all
chkconfig yum on
service yum start
yum update
}}}
You'll probably have a kernel update and need to reboot.
Listing the databases (note there is no space between {{{-p}}} and the password):
{{{
mysql -u username -ppassword -s -N -e "show databases"
}}}
I also find these commands useful in addition to the above for listing other items of interest.
{{{
show processlist
show variables
show status like 'qcache%'
}}}
A nice perl script http://mysqltuner.com/ gives a quick look into your server setup.
Backing up a database
{{{
mysqldump -u username -ppassword --single-transaction database_name > database_name.sql
}}}
Alternatively you can dump all the databases
{{{
mysqldump -u username -ppassword --single-transaction --all-databases > all.sql
}}}
To restore a database or databases
{{{
mysql -u username -p database_name < database_name.sql
}}}
An appropriately named empty database will need to exist in order to restore it.
If you are restoring from scratch on a freshly loaded operating system, restoring the dump of all databases goes like this, since no passwords or privileges have been set up yet.
{{{
mysql -u root -p < fulldump.sql
service mysql restart
}}}
But say you need to recover just one database from your complete dump file; then you'll need to do this, otherwise you'll restore them all, which may not be what you expected.
{{{
mysql -u root -p --one-database database_name < fulldump.sql
}}}
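If you'd rather not replay the whole file through mysql, you can also slice a single database out of the full dump on the {{{-- Current Database:}}} marker lines that mysqldump writes. A sketch against a fake miniature dump (the database name mydb is made up):
{{{
cd "$(mktemp -d)"
cat > fulldump.sql <<'EOF'
-- Current Database: `mydb`
CREATE TABLE t1 (a INT);
-- Current Database: `other`
CREATE TABLE t2 (b INT);
EOF
# keep only the section belonging to mydb
awk '/^-- Current Database: /{p=($0 ~ /`mydb`/)} p' fulldump.sql > mydb.sql
grep -c 'CREATE TABLE' mydb.sql
}}}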
Make sure your ~NetApp is licensed for this, it's free but the license needs to be in place to use it.
Enable sis (deduplication) on the volume
{{{
filer> sis on /vol/test
SIS for “/vol/test” is enabled.
Already existing data could be processed by running “sis start -s /vol/test”.
}}}
~DeDup the existing data
{{{
filer> sis start -s /vol/test
The file system will be scanned to process existing data in /vol/test.
This operation may initialize related existing metafiles.
Are you sure you want to proceed with scan (y/n)? y
Thu Nov 13 10:01:38 EST [wafl.scan.start:info]: Starting SIS volume scan on volume test.
The SIS operation for “/vol/test” is started.
}}}
You can view the status on the volume using the following command; if you don't give a volume name, all volumes are displayed.
{{{
filer> sis status /vol/test
Path State Status Progress
/vol/test Enabled Idle Idle for 00:11:23
}}}
After sis finishes deduping your data, use df to show the amount saved and deduplication percentage
{{{
filer> df -sh /vol/test
Filesystem used saved %saved
/vol/test/ 519GB 754GB 59%
}}}
Here are the quick steps to remote mount a share from a Windows server to a Linux machine. You must have sudo and Samba installed.
In /etc/fstab add the share to mount:
{{{
//192.168.1.9/Data /smb/data/sev smbfs noauto,ro,user,username=sev,workgroup=MyWorkGroup 0 0
}}}
Create the directory mount point:
{{{
mkdir -p /smb/data/sev
}}}
In /etc/sudoers add the following:
{{{
sev antarctica=(root) NOPASSWD: /bin/mount, /usr/bin/smbumount
}}}
In order to mount and unmount the share use:
{{{
sudo mount /smb/data/sev
sudo smbumount /smb/data/sev
}}}
You will need to create individual entries for each user.
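If several users need this, a User_Alias keeps /etc/sudoers from filling up with near-duplicate lines — a sketch reusing the host 'antarctica' from above (the extra usernames alice and bob are hypothetical):
{{{
User_Alias SMBUSERS = sev, alice, bob
SMBUSERS antarctica=(root) NOPASSWD: /bin/mount, /usr/bin/smbumount
}}}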
When upgrading from ~RHEL4 to ~RHEL5 I ran across a problem with xinetd because of a newly added feature called the per-source IP limit (per_source). I was running an email system behind a proxy service (perdition), and because all the inbound traffic came from the same source IP, the limit was quickly hit and service was denied. To get the old, unlimited behavior, add this to the xinetd service file.
{{{
per_source = UNLIMITED
}}}
Then restart xinetd.
For Solaris 7 load the ~SUNWski package from the Easy Access Server cd from the ~WebServer package. Then create a soft link /dev/urandom to /dev/random.
For Solaris 8 you need to install patch 112438-02 which will add the /dev/random and /dev/urandom devices after a reconfigure reboot.
Solaris 9 includes this functionality in the base OS.
These rules need fuller descriptions, but I wanted to jot them down before I lost them.
{{{
-A RH-Firewall-1-INPUT -p tcp --dport 22 --syn -m limit --limit 1/m --limit-burst 3 -j ACCEPT
}}}
Hitcount method
{{{
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport ssh -m state --state NEW -m recent --name sshprobe --update --seconds 60 --hitcount 3 -j DROP
-A RH-Firewall-1-INPUT -p tcp -m tcp --dport ssh -m state --state NEW -m recent --name sshprobe --set -j ACCEPT
}}}
If you need to redirect your visitors to a new page, this HTML redirect code may be just what you're looking for. Place the following HTML redirect code between the <HEAD> and </HEAD> tags of your HTML code.
{{{
<meta HTTP-EQUIV="REFRESH" content="0; url=http://www.widgets.com/index.html">
}}}
The above HTML redirect code will redirect your visitors to another web page instantly. The {{{0}}} in {{{content="0;}}} may be changed to the number of seconds you want the browser to wait before redirecting.
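For example, a five-second delay before sending the visitor on (same placeholder URL as above):
{{{
<meta HTTP-EQUIV="REFRESH" content="5; url=http://www.widgets.com/index.html">
}}}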
The typical process for creating an SSL certificate
{{{
openssl genrsa -des3 -out www.key 2048
}}}
This will create an RSA key of 2048 bits in the file www.key. During the process you will be asked for a passphrase. Alternatively you can do it without a passphrase:
{{{
openssl genrsa -out my.key 2048
}}}
Next you need to create a csr file to send to the cert authority
{{{
openssl req -new -key my.key -out my.csr
}}}
They will send you back a crt file (my.crt), which is your signed certificate.
Using a passphrase is a good thing, but from a practical standpoint it's going to be a problem during the boot process, as whatever service is using the cert will wait for someone or something to enter the passphrase. That's not very useful. To remove the passphrase so the service can start unattended, run this command to create an unencrypted version. It will prompt for the passphrase during the process.
{{{
openssl rsa -in my.key -out my.pem
}}}
Use this file and the service will not ask for a passphrase when started.
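After stripping the passphrase it's worth confirming the new key really is the same key: the RSA moduli should match. A self-contained sketch that generates a throwaway key (the passphrase 'demo' and filenames are illustrative) rather than touching your real one:
{{{
cd "$(mktemp -d)"
openssl genrsa -des3 -passout pass:demo -out my.key 2048 2>/dev/null
openssl rsa -in my.key -passin pass:demo -out my.pem 2>/dev/null
m1=$(openssl rsa -in my.key -passin pass:demo -noout -modulus)
m2=$(openssl rsa -in my.pem -noout -modulus)
[ "$m1" = "$m2" ] && echo 'moduli match'
}}}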
Verify your current kernel version
{{{
uname -r
}}}
Output:
{{{
2.6.9-89.0.20.ELsmp
}}}
List all the installed kernels using rpm command, don't forget the smp kernels
{{{
rpm -q kernel kernel-smp
}}}
Output:
{{{
kernel-smp-2.6.9-89.0.15.EL
kernel-smp-2.6.9-89.0.16.EL
kernel-smp-2.6.9-89.0.18.EL
kernel-smp-2.6.9-89.0.19.EL
kernel-smp-2.6.9-89.0.20.EL
kernel-2.6.9-89.0.15.EL
kernel-2.6.9-89.0.16.EL
kernel-2.6.9-89.0.18.EL
kernel-2.6.9-89.0.19.EL
kernel-2.6.9-89.0.20.EL
}}}
Remove old kernels. Do not remove the kernel currently running. I also like to keep at least the last kernel around just in case.
Choose which kernel you want to uninstall from the list of those installed. Type the following command to remove the kernel package(s) under RHEL / ~CentOS / Fedora Linux. For example:
{{{
rpm -e kernel-2.6.9-89.0.15.EL kernel-smp-2.6.9-89.0.15.EL
}}}
Edit your grub config file to remove all the kernel(s) you just removed.
The proper procedure for replacing a failed drive is to first carefully identify it, either using the web GUI or via the command line using:
{{{
tw_cli info
Ctl Model Ports Drives Units NotOpt RRate VRate
------------------------------------------------------------------
c2 9500S-8 8 8 2 0 4 4
c3 9500S-8 8 8 2 1 4 4
}}}
Note the controller that has the ~NotOpt count that isn’t zero and get a more detailed info report on it using the command:
{{{
tw_cli info c3
Unit UnitType Status %Cmpl Stripe Size(GB) Cache AVerify OvrECC
------------------------------------------------------------------------------
u0 RAID-5 DEGRADED - 64K 1396.92 ON OFF ON
u1 SPARE OK - - 233.753 - OFF -
Port Status Unit Size Blocks Serial
---------------------------------------------------------------
p0 OK u0 233.76 GB 490234752 Y6157GTE
p1 OK u0 233.76 GB 490234752 Y6157EKE
p2 OK u0 233.76 GB 490234752 Y6157GME
p3 OK u0 233.76 GB 490234752 Y6157CME
p4 OK u0 233.76 GB 490234752 Y615797E
p5 BAD u0 233.76 GB 490234752 Y615789E
p6 OK u1 233.76 GB 490234752 Y6157EME
p7 OK u0 233.76 GB 490234752 Y6157EXE
}}}
Remove the failed drive from controller c3, slot p5, and set it aside for diagnostics and RMA processing. Install the spare drive in this tray and put it back into the chassis. Now that the drive is re-installed you need to make it a hot spare.
First scan the controller for the new drive by issuing the following command:
{{{
tw_cli maint rescan c3
Rescanning controller /c3 for units and drives ...Done.
}}}
Now you will need to create the hot spare unit, issue the command:
{{{
tw_cli maint createunit c3 rSPARE p5
}}}
And you are done!
Let's say you've been experimenting with another OS like Linux or ~FreeBSD and now you want to restore your MBR. Do the following at a DOS prompt:
{{{
fdisk /MBR
}}}
The basic order is:
*Create the quotas file on the partition.
*Edit the quotas with edquota.
*Run quotacheck to calculate quotas of existing files.
*Turn on the quota system with quotaon.
*Edit the vfstab file to permanently enable them.
Let's say you have a set of unix filesystems as illustrated by "df -k"
{{{
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 2055463 1055685 938115 53% /
/proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
fd 0 0 0 0% /dev/fd
/dev/dsk/c0t0d0s3 1017831 90227 866535 10% /var
swap 1372824 32 1372792 1% /var/run
swap 1372792 0 1372792 0% /tmp
/dev/dsk/c0t0d0s7 30209110 2715191 27191828 10% /export/home
/dev/dsk/c0t0d0s4 4130982 1158300 2931373 29% /usr/local
}}}
And you want to enable quotas on /export/home for various users. First you need to go to that root directory and create an empty "quotas" file owned by root.
{{{
cd /export/home
touch quotas
}}}
Now you can enable quotas by running quotaon.
{{{
quotaon /export/home
}}}
To turn quotas off, use the quotaoff command.
{{{
quotaoff /export/home
}}}
Of course if you want quotas on the next time you reboot you will need to edit the /etc/vfstab and add the "quota" option to the desired mount point(s).
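For example, the vfstab entry for /export/home (device names taken from the df output above) would end up looking something like this, where "rq" is the UFS shorthand for mounting read-write with quotas:
{{{
/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export/home ufs 2 yes rq
}}}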
Now you need to set some quotas using the edquota command. This will invoke "vi" to edit the quotas file for the user in question.
{{{
edquota fred
}}}
Now you would probably like the quota system to account for all the files the user(s) have created before the quota system was turned on. You need to run the quotacheck command to do this.
{{{
quotacheck /export/home
}}}
To run quotacheck on a logging filesystem you must use the "-p" and "-f" options.
To see a full quota report, use the repquota command. If the option "-v" is given a verbose output will follow listing all quotas and not just the ones overlimit.
{{{
repquota -a [-v]
}}}
To view an individual user's quota, say fred's, use the quota command. If the option "-v" is given a verbose output will follow listing all quotas and not just the ones overlimit.
{{{
quota [-v] fred
}}}
If ~SELinux is enabled, the default settings for the HTTPD service cause web pages located on a partition other than the root "/" partition to fail to load. You get a permission denied error of some sort.
The solution is to either turn off ~SELinux altogether or modify the policy for the HTTPD service. Use the Security Level Configuration tool by calling the command {{{system-config-securitylevel}}} as root.
Either disable ~SELinux or open the Policy for HTTPD and check the box to //Disable ~SELinux protection for httpd daemon//. This change will require a reboot of the machine to take effect so plan accordingly.
Certificates will usually be installed in one of two places, /usr/share/ssl/certs or /usr/local/ssl/certs, with a file name consisting of the server name and a suffix of ".pem". For example,
install the imapd certificate on /usr/share/ssl/certs/imapd.pem and the ipop3d certificate on /usr/share/ssl/certs/ipop3d.pem. These files should be protected 600. Alternatively, you could link imapd.pem and ipop3d.pem to the same file.
The certificate must contain a private key and a certificate. The private key must not be encrypted. The following command to openssl can be used to create a self-signed
certificate with a 10-year expiration.
{{{
openssl req -new -x509 -nodes -out imapd.pem -keyout imapd.pem -days 3650
}}}
(Error code: sec_error_reused_issuer_and_serial)
Apparently there can be a conflict among HP's self-signed iLO certs. Firefox leaves me no way to say that it's OK, while IE allows me to click through it. To resolve this in Firefox: since the iLO certificate is self-signed, it is stored in the certificate authorities section, Tools -> Options -> Advanced -> View Certificates -> Authorities. Scroll to the section for ~Hewlett-Packard Company and find your iLO device. Click on it and Delete from the Authorities tab.
I wanted to change the admin password on my web server through remote desktop (RDP), but ~Ctrl-Alt-Delete always goes to the local computer. To send it to the remote computer use ~Ctrl-Alt-End to achieve the same thing.
!Force a queue run
{{{
sendmail -q
}}}
!Watch your queue
{{{
sendmail -bp
}}}
or
{{{
mailq
}}}
!Quarantining emails
Suppose you need to remove a lot of spam messages from one sender that are backing up your queue. You'll need to remove those and set them aside to deal with later.
{{{
service sendmail stop
sendmail -QSPAM -qSuser@fqdn
service sendmail start
}}}
To see those messages
{{{
mailq -qQSPAM
}}}
!Removing those quarantined files
{{{
cd /var/spool/mqueue
LIST=`ls hf*|cut -b 3-`
for fs in $LIST
do
\rm hf$fs df$fs
done
}}}
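The same cleanup can be done without parsing ls, using shell parameter expansion to strip the hf prefix. A self-contained sketch that fakes a couple of quarantined files in a scratch directory (the real files live in /var/spool/mqueue):
{{{
cd "$(mktemp -d)"
touch hfq1 dfq1 hfq2 dfq2        # fake quarantined queue files
for hf in hf*; do
    [ -e "$hf" ] || continue     # no quarantined files at all
    id=${hf#hf}                  # queue ID without the 'hf' prefix
    rm -f "hf$id" "df$id"
done
ls hf* df* 2>/dev/null || echo 'queue clean'
}}}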
!Moving queued mail to another host
{{{
sendmail -q -OFallBackMXHost=some_other_mta_fqdn
-OTimeout.initial=5s -OTimeout.connect=5s -OTimeout.help=5s
}}}
Smart host forwarding. You can enable this behavior by defining ~SMART_HOST. In a firewall situation, all non local mail can be forwarded to a gateway machine for handling.
{{{
define(`SMART_HOST', `gateway.your.domain')
}}}
Rate-limiting
{{{
dnl # after 5 bad address attempts, throttle this sender --
define(`confBAD_RCPT_THROTTLE',`5')dnl
}}}
Privacy
{{{
define(`confPRIVACY_FLAGS', `authwarnings,novrfy,noexpn,restrictqrun')dnl
}}}
Deal with problems early
{{{
define(`confTO_QUEUEWARN', `2h')dnl
}}}
Stop using ident, which is mostly blocked by firewalls anyway
{{{
define(`confTO_IDENT', `0')dnl
}}}
Procmail
{{{
define(`PROCMAIL_MAILER_PATH',`/usr/bin/procmail')dnl
FEATURE(local_procmail,`',`procmail -t -Y -a $h -d $u')dnl
}}}
Don't forget to add this to the end of the mc file
{{{
MAILER(procmail)dnl
}}}
Access database
{{{
FEATURE(`delay_checks')dnl
FEATURE(`access_db',`hash -T<TMPF> -o /etc/mail/access.db')dnl
FEATURE(`blacklist_recipients')dnl
}}}
Anti Spam settings
{{{
FEATURE(`greet_pause', `5000')
FEATURE(`ratecontrol', `nodelay',`terminate')dnl
}}}
Blackhole lists
{{{
FEATURE(`dnsbl', `zen.spamhaus.org', `"550 Email rejected by RBL zen.spamhaus.org"')
FEATURE(`dnsbl', `bl.spamcop.net', `"550 Email rejected by RBL bl.spamcop.net"')
}}}
~ClamAV Milter
{{{
INPUT_MAIL_FILTER(`clamav', `S=local:/var/run/clamav/clmilter.sock, F=, T=S:4m;R:4m;C:30s;E:10m')dnl
define(`confINPUT_MAIL_FILTERS', `clamav')
}}}
~SpamAssassin Milter
{{{
INPUT_MAIL_FILTER(`spamassassin', `S=local:/var/run/spamass.sock, F=, T=C:15m;S:4m;R:4m;E:10m')dnl
define(`confMILTER_MACROS_CONNECT',`t, b, j, _, {daemon_name}, {if_name}, {if_addr}')dnl
}}}
!Professional Services
* Install, manage, maintain and upgrade your Solaris, ~HP-UX, other Unix and Windows systems.
* Install, manage, maintain and upgrade Networking systems. Cisco routers and switch programming available. I can design your network from the ground up or modify it to better suit your needs taking into account all of your security and remote access needs.
* Install, manage, maintain and upgrade most open source software. In particular, Apache, PHP, Perl, SSL, SSH, DNS, Sendmail, Samba and many others.
* Install, manage, maintain and upgrade many commercial software packages as well. I have an extensive background in many EDA tools as well as some PC based products. I can also write my own programs in C.
* Solve problems so you can get on with your business.
!Retainer Programs
There are two retainer programs available. The benefits are preferred access to support along with preferred pricing.
|!Retainer|!Benefits|!Cost|
|Minimal|One hour of telephone and/or remote service via Ssh. Service is billed in 15-minute increments.|$100|
|Proactive|Three hours of telephone and/or remote services via Ssh. Proactive installation of available recommended and security patches for the monitored system(s). Service is billed in 15-minute increments. Up to two unused hours may carry over for one month.|$250|
!Standard Rates
The standard consulting rate is $120 per hour. Those who have a retainer program in place receive the discounted rate of $100 per hour. Further, those with a retainer program who use more than 8 hours of service (in addition to the retainer hours) receive the preferred rate of $80 per hour. Hours used after 6PM are charged at time and a half. Weekends and holidays are double time. One-way travel time is charged. In addition there is a two-hour minimum for those without retainer contracts and a one-hour minimum for those with contracts.
!Documents
* [[Consulting Services Offerings|Consulting Services Offerings.pdf]]
* [[Blank Time Sheet|Blank Time Sheet.pdf]]
Make sure that sendmail was compiled with the necessary options. STARTTLS and SASL must be present in the resulting list.
{{{
sendmail -d0.1 -bv
}}}
Get the authentication mechanisms you can use.
{{{
saslauthd -v
}}}
Edit /etc/sysconfig/saslauthd config file to suit.
{{{
MECH=shadow
}}}
Setup SASL to run on boot and start.
{{{
chkconfig saslauthd on
service saslauthd start
}}}
Test it with an account to verify the sasl auth mechanism is working before spending time on the sendmail interface.
{{{
testsaslauthd -u username -p password
}}}
Now you need to have an SSL cert for your smtp server. Here's how to create a dummy testing cert.
!!!~RHEL4
{{{
cd /usr/local/ssl/certs
make sendmail.pem
}}}
!!!~RHEL5
{{{
cd /etc/pki/tls/certs
make-dummy-cert
}}}
Then you edit up a sendmail.mc file to look for it.
!!!~RHEL4
{{{
define(`confCACERT_PATH',`/usr/share/ssl/certs')
define(`confCACERT',`/usr/share/ssl/certs/ca-bundle.crt')
define(`confSERVER_CERT',`/usr/share/ssl/certs/sendmail.pem')
define(`confSERVER_KEY',`/usr/share/ssl/certs/sendmail.pem')
}}}
!!!~RHEL5
{{{
define(`confCACERT_PATH',`/etc/pki/tls/certs')
define(`confCACERT',`/etc/pki/tls/certs/ca-bundle.crt')
define(`confSERVER_CERT',`/etc/pki/tls/certs/sendmail.pem')
define(`confSERVER_KEY',`/etc/pki/tls/certs/sendmail.pem')
}}}
Now define the auth mechanisms for sendmail
{{{
define(`confAUTH_OPTIONS', `A p y')dnl
TRUST_AUTH_MECH(`LOGIN PLAIN')dnl
define(`confAUTH_MECHANISMS', `LOGIN PLAIN')dnl
}}}
Recompile your cf file and restart sendmail.
{{{
make -C /etc/mail sendmail.cf
service sendmail restart
}}}
Note: Remember to allow the proper smtp ports through your firewall.
[img[The view from home|homescape1.jpg]]
The //finest// in Unix System Administration and Open Source Software installation and support.
After installing ~SpamAssassin I wanted to access it with a milter so I chose spamass-milter http://savannah.nongnu.org/projects/spamass-milt/ and installed v0.3.1 on my redhat linux box. You need the c++ compiler to build the milter.
{{{
./configure
make
make install
}}}
Copy an init startup file "spamass-milter-redhat" to /etc/init.d/spamass-milter. Make it executable and then make sure the following edits are in place so it will work with things being under /usr/local, as they normally are.
{{{
[ -x /usr/local/sbin/spamass-milter ] || exit 0
PATH=$PATH:/usr/sbin:/usr/local/sbin
}}}
Configure the startup script to run:
{{{
chkconfig --add spamass-milter
chkconfig spamass-milter on
service spamass-milter start
}}}
Now edit sendmail to use your milter by editing your sendmail.mc file and adding the following:
{{{
INPUT_MAIL_FILTER(`spamassassin', `S=local:/var/run/spamass.sock, F=, T=C:15m;S:4m;R:4m;E:10m')dnl
define(`confMILTER_MACROS_CONNECT',`t, b, j, _, {daemon_name}, {if_name}, {if_addr}')dnl
dnl define(`confMILTER_MACROS_HELO',`s, {tls_version}, {cipher}, {cipher_bits}, {cert_subject}, {cert_issuer}')dnl
}}}
Now rebuild the cf file and restart sendmail:
{{{
make -C /etc/mail
service sendmail restart
}}}
To test a message for its spam score use:
{{{
spamassassin -x -t < filename
}}}
where filename is a file containing the single message you want to score. The -x option tells ~SpamAssassin not to create a local user_prefs file. Don't forget to load the perl module/version that ~SpamAssassin is installed under!
If you would like to test some rules from an alternate location before installing them site wide use this option:
{{{
--siteconfigpath=/path_to_rules
}}}
To verify your custom rules you have written don't have errors in them:
{{{
spamassassin --lint
}}}
or:
{{{
spamassassin --lint -D
}}}
To verify the version:
{{{
spamassassin -V
}}}
Some procmailrc examples. First, calling spamassassin through procmail. Note the explicit lock to limit it to one spamassassin process per user, and the 256KB message-size limit, since most spam is only a few KB.
{{{
:0fw: spamassassin.lock
* < 256000
| spamassassin
}}}
Refile based on spam level:
{{{
:0:
* ^X-Spam-Level: \*\*\*\*\*\*\*\*\*\*\*\*\*\*\*
almost-certainly-spam
}}}
Refile based on overall thumbs up or down status:
{{{
:0:
* ^X-Spam-Status: Yes
probably-spam
}}}
.firstletter{ width:0.75em; font-size:300%; font-family:times,arial;
line-height:80%; margin-left:2em; }
.indent {margin-left:3em;}
.textright{text-align:right;}
.borderless, .borderless table, .borderless td, .borderless tr, .borderless th, .borderless tbody { text-align:right; border:0 !important; margin:0 !important; padding:2px !important; margin-left: auto !important; margin-right: auto !important;}
Prior to widespread encryption it was common to test SMTP, POP and IMAP servers using a simple telnet connection to the corresponding port (25, 110 and 143) and interacting directly with the server via text commands. This was simple, clean and easy to do without need of configuring any mail clients just for testing. With the advent of encryption (TLS or SSL) you need to use openssl to connect to the service in order to manually issue your testing commands.
For example to connect to a ~POP3 server that uses SSL (~POP3S) we call the following command:
{{{
openssl s_client -connect localhost:995
}}}
Make sure you replace localhost with your server address, and if you use a non-standard port (995) then change it as well. This command will connect to the ~POP3 server and handle all the SSL stuff for you. You will see messages about certificates and SSL handshakes. After all the SSL messages you will be greeted by the ~POP3 server message. You can then start interacting with the server as you would using telnet. The standard encrypted ports for SMTP, POP and IMAP are 587, 995 and 993 respectively.
Here's a basic SMTP (Simple Mail Transport Protocol) conversation. The data we send is in bold:
% ''telnet example.com 25'' -- connect to the SMTP port on example.com
Trying 192.168.1.10 ...
Connected to example.com.
Escape character is '^]'.
220 mailhub.example.com ESMTP Sendmail 8.9.1a/8.9.1; Sun, 11 Apr 1999 15:32:16 -0400 (EDT)
''HELO client.example.com'' -- identify the machine we are connecting from
(can also use EHLO)
250 mailhub.example.com Hello dnb@client.example.com [192.168.1.11], pleased to meet you
''MAIL FROM: <dnb@example.com>'' -- specify the sender
250 <dnb@example.com>... Sender ok
''RCPT TO: <dnb@example.com>'' -- specify the recipient
250 <dnb@example.com>... Recipient ok
''DATA'' -- begin to send message, note we send several key header lines
354 Enter mail, end with "." on a line by itself
''From: David N. Blank (David N. Blank)
To: dnb@example.com
Subject: SMTP is a fine protocol
Just wanted to drop myself a note to remind myself how much I love SMTP.
Peace,
dNb
.'' -- finish sending the message
250 ~PAA26624 Message accepted for delivery
''QUIT'' -- end the session
221 mailhub.example.com closing connection
Connection closed by foreign host.
Users of Windows XP without ~SP1 hit a glass ceiling at 137GB of storage, a limit imposed by the atapi.sys driver. Even those with Service Pack 1 are advised to seek out a "hot fix". Use the Knowledge Base at http://support.microsoft.com/ and search for article 303013.
[[Putty: a free telnet/ssh client|http://www.chiark.greenend.org.uk/~sgtatham/putty/]]
[[testssl.sh Testing TLS/SSL encryption|http://testssl.sh]]
You can configure Sendmail to use a different aliases file, or several of them; by default the service uses /etc/aliases on Linux (/etc/mail/aliases on Solaris). To change this, define the '~ALIAS_FILE' directive in the Sendmail config file '/etc/mail/sendmail.mc'.
Your configuration file will have the following default value set:
>define(`~ALIAS_FILE', `/etc/aliases')dnl
If you want Sendmail to use the NIS database, you will have to specify it as follows:
>define(`~ALIAS_FILE',`nis:mail.aliases@nisdomainname')dnl
Substitute your nis domain name here.
Rebuild the Sendmail database by running make:
>make -C /etc/mail
Restart Sendmail:
> service sendmail restart
Edit the '/etc/nsswitch.conf' file, changing:
> aliases: files
to:
> aliases: nis files
There is a lot of information about using free SSL certificates from http://www.startssl.com/ to protect and encrypt an Apache webserver. These certificates can protect any kind of service that needs an SSL certificate as well; there just isn't a lot of documentation on doing it in practice. Here's how to protect your mail service with real certificates instead of having to use self-signed ones. Follow the instructions in Fun with openssl certificate generation to acquire certificates for imap.fqdn and smtp.fqdn just as you did for www.fqdn. Once you have your certificates back you'll need to install them and configure your applications. Remember to download the intermediate certificate called sub.class1.server.ca.pem, as you'll need it to form a valid cert chain.
To set up Sendmail I modified these lines in my mc file.
{{{
define(`confSERVER_CERT', `/etc/pki/tls/certs/smtp.fqdn.crt')dnl
define(`confSERVER_KEY', `/etc/pki/tls/private/smtp.fqdn.key')dnl
}}}
Two things are important for this to work. First, to get a proper cert file for Sendmail you'll need a file that contains the certificate you received followed by their intermediate certificate. Second, the private key file needs to be unencrypted.
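Both steps can be done with openssl. Here's a sketch (the smtp.fqdn file names are hypothetical, matching the mc example above); a throwaway passphrase-protected key is generated first so the example is self-contained:

```shell
# Generate a throwaway passphrase-protected key just for demonstration.
openssl genrsa -aes256 -passout pass:demo -out smtp.fqdn.enc.key 2048

# Strip the passphrase: Sendmail needs the private key unencrypted.
openssl rsa -in smtp.fqdn.enc.key -passin pass:demo -out smtp.fqdn.key

# Build the cert file: your certificate followed by the intermediate.
# (These are the files you downloaded; not run here.)
# cat smtp.fqdn.crt sub.class1.server.ca.pem > smtp.fqdn.chained.crt

# Restrict access to the unencrypted key.
chmod 600 smtp.fqdn.key
head -n 1 smtp.fqdn.key
```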
To set up Panda IMAP (UW IMAP) you'll do something like the above, but the files need specific names because the application has them hard-coded. Use imapd.pem for both the unencrypted private key and the certificate (with the intermediate certificate appended as before), in the standard openssl locations (/etc/pki/tls/private and /etc/pki/tls/certs respectively).
For both, make sure the permissions are right: the private key should be mode 600 and the certificate mode 644.
To test the smtp (and smtps) service use
{{{
openssl s_client -connect localhost:587 -starttls smtp
openssl s_client -connect localhost:465
}}}
To test the imap (and imaps) service use
{{{
openssl s_client -connect localhost:143 -starttls imap
openssl s_client -connect localhost:993
}}}
I like to use rsync to archive stuff to another location. My shorthand approach is
{{{
rsync -avz --delete --progress --stats /source /destination
}}}
I like to see progress and stats when done. The --delete option removes any files in the destination that were removed from the source; otherwise the destination would grow unchecked.
Sometimes it's helpful, especially when copying an entire partition in Linux or on a ~NetApp, to exclude some housekeeping directories.
{{{
rsync -avz --delete --exclude lost+found --exclude .snapshot --progress --stats /source /destination
}}}
You can configure the rsync service in Linux by editing the /etc/xinetd.d/rsync file
{{{
disable = no
}}}
And then edit /etc/rsyncd.conf to populate some targets
{{{
[home]
path = /home
uid = root
}}}
This allows file transfers from the /home directory with permissions as if you were the root user; otherwise you would be treated as the user nobody. Enable this with the following
{{{
service xinetd restart
}}}
Then you can use this in an rsync command to send or receive files from this host.
{{{
rsync -avz rsync://thor/home/widget /widget
}}}
This will retrieve the widget subdirectory under /home from the host thor and put it in a /widget directory on your local host.
Sometimes you don't want to run fsck on the large volumes you have mounted just to flip a kernel; long fsck runs often need to be scheduled in advance. To figure out whether a check is going to bite you on reboot, you can inspect the filesystem and make adjustments using tune2fs.
{{{
tune2fs -l /dev/sda1
}}}
Which results in this output
{{{
tune2fs 1.35 (28-Feb-2004)
Filesystem volume name: /export/disk1
Last mounted on: <not available>
Filesystem UUID: de5392bb-bfb3-4087-af71-7012ada54cda
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal resize_inode filetype needs_recovery sparse_super large_file
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 268435456
Block count: 536870912
Reserved block count: 26843545
Free blocks: 174194677
Free inodes: 267494286
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 896
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 16384
Inode blocks per group: 512
Filesystem created: Fri Mar 30 21:03:51 2007
Last mount time: Tue Apr 24 13:56:59 2012
Last write time: Wed Apr 25 10:48:40 2012
Mount count: 5
Maximum mount count: 37
Last checked: Fri Jul 2 23:28:10 2010
Check interval: 15552000 (6 months)
Next check after: Wed Dec 29 22:28:10 2010
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 128
Journal inode: 8
Default directory hash: tea
Directory Hash Seed: 0dfcc9df-b4c0-4076-87fb-3a67dc8ac206
Journal backup: inode blocks
}}}
Two things will trigger a check: the number of mounts and the interval between checks. Compare "Mount count" to "Maximum mount count" to determine whether this will force an fsck on reboot. "Next check after" tells you whether you've reached the date at which the check interval will force an fsck at reboot. For me it's always been the check interval, since my machines tend to stay up and mounted all the time. To avoid checks so you can flip a kernel, you can turn the interval check off.
{{{
tune2fs -i 0 /dev/sda1
}}}
Turn it back on afterwards
{{{
tune2fs -i 6m /dev/sda1
}}}
Apparently the plugin is well known to be broken, so once I updated to 3.6 I was out of luck. ~VMware doesn't seem to be in a hurry to fix it, since it's been a year or longer with no update, and there's no FAQ entry for it either, which there should be. There is a solution though: use the free ~VMware vSphere client to access your ~VMware Server 2 environment instead of the supplied web browser plugin. Just specify
{{{
localhost:8333
}}}
as the name of the vCenter server. Problem solved.
There's a useful FAQ here http://www.vmware.com/products/server/faqs.html
SSH to the Linux host and run
{{{
vncserver :1 -depth 24
}}}
Many of the CAD tools we use need full 24-bit color and don't play well with the default 16 bits. You may wish to use a display number other than 1; you'll need to know it when you configure your ssh tunnel and/or viewer.
@@Note:@@ the first time you run this command you will be prompted to create a VNC password to control access to this display. It isn't very secure, so don't use your Linux login password. Alternatively you can use
{{{
vncpasswd
}}}
to set it up in advance or to change it later.
To get rid of this display server instance use
{{{
vncserver -kill :1
}}}
@@Note:@@ The display number corresponds to the VNC port number the viewer connects to: display 1 corresponds to port 5901, display 2 to 5902, and so on. You'll need this to tunnel a VNC display through ssh.
!Ssh Tunnel
On your Windows machine start Putty and create a Session to save. This example uses @@gateway.widgets.com@@ as the ssh gateway into the firewalled network, and tunnels to a host on the internal network called @@thunder@@; adjust accordingly. Note that we use the IP address of @@thunder@@ rather than the hostname, since we won't get DNS resolution for the internal network. We use the port information from the earlier vncserver setup.
Fill in the following...
{{{
Host Name => gateway.widgets.com
Under Connection->SSH->X11 Check the "Enable X11 forwarding"
Under Connection->SSH->Tunnels
Source Port field => 5901
Destination => 192.168.1.7:5901
Click the Add button
Go to Session Category and fill in a name under Saved Session and Click Save.
}}}
@@Note:@@ The source port will be used on your PC's end to connect to by the VNC viewer. The destination is the IP address and VNC display port used by that server.
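The same tunnel can be built from a command-line ssh client, and the port arithmetic is easy to script. The host names and IP below mirror the hypothetical example above:

```shell
# VNC display N listens on TCP port 5900+N by default.
vnc_port() { echo $((5900 + $1)); }

vnc_port 1
# → 5901

# Command-line equivalent of the Putty tunnel above (not run here):
# ssh -L "$(vnc_port 1):192.168.1.7:$(vnc_port 1)" user@gateway.widgets.com
```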
Click Open to connect and log in using your ssh credentials. You can now run your VNC viewer on your local Windows PC and tell the viewer to use
{{{
localhost:01
}}}
Supply the VNC password when asked. The viewer I use is ~TightVNC, which can be found at http://www.tightvnc.com/
Recently I ran yum update on a machine (~CentOS 4.8) and had this error pop up
{{{
--> Finished Dependency Resolution
Error: Missing Dependency: libgcj4 = 4.1.2-42.EL4 is needed by package libgcj4-src
Error: Missing Dependency: libmudflap = 4.1.2-42.EL4 is needed by package libmudflap-devel
}}}
I did a little Google research and finally came up with the answer. The packages had somehow become duplicated, and the duplicate instances were causing the error. The solution was to remove the duplicate packages and then run the update.
{{{
yum remove libgcj4-src
yum remove libmudflap-devel
}}}
Followed by
{{{
yum update
}}}
Other useful commands
{{{
rpm -e package_name
rpm --rebuilddb
}}}
Here's another one of those great conflicts that can happen when updating a machine
{{{
Error: nss conflicts with prelink <= 0.3.3-0.EL4
}}}
I did some more Google research and found more clues on the ~CentOS website, plus more useful ways to debug yum update problems in general. First, list your troublesome packages; you probably have two versions of a package installed due to some past failed cleanup. Depending on the history of your system, you may find all sorts of odd things in that listing.
{{{
yum list nss prelink
}}}
Here's the output
{{{
Setting up repositories
Reading repository metadata in from local files
Installed Packages
nss.x86_64 3.12.3.99.3-1.el4_8.2 installed
nss.i386 3.12.3.99.3-1.el4_8.2 installed
prelink.x86_64 0.3.3-1.EL4 installed
prelink.x86_64 0.3.3-0.EL4 installed
Available Packages
nss.i386 3.12.6-1.el4_8 update
nss.x86_64 3.12.6-1.el4_8 update
}}}
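To spot duplicates across everything installed, one approach is to print bare package names and pass them through sort and uniq -d, which keeps only names appearing more than once. Shown here on a canned list so it's self-contained; on a real system you'd feed it the output of `rpm -qa --qf '%{NAME}\n'`:

```shell
# Sample of what rpm -qa --qf '%{NAME}\n' might print; prelink appears
# twice because two versions of it are installed.
cat <<'EOF' | sort | uniq -d
nss
prelink
prelink
bash
EOF
# → prelink
```

Note that a package listed once per architecture (like nss.i386 and nss.x86_64 above) is normal multiarch, not a duplicate; this check only flags a name installed at two different versions.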
What has probably happened is an incomplete install of the new version, so I deleted it from the RPM database and then installed it again:
{{{
rpm -e --justdb --nodeps prelink-0.3.3-1.EL4
yum update prelink
}}}
After that I ran an update for everything and things were fine.