Apache HTTPD – show full length file names in directory listings

By default, Apache HTTPD shows truncated filenames when displaying directory listings.

CentOS Linux release 7.6.1810 (Core), httpd.x86_64, 2.4.6-88.el7.centos, @base

On this version of Apache HTTPD, the solution for showing full filenames is to
nano /etc/httpd/conf.d/autoindex.conf

For a default/standard installation, somewhere near line 16, the file should contain:

IndexOptions FancyIndexing HTMLTable VersionSort

Append "NameWidth=*" to the end of that line; the result should be:
IndexOptions FancyIndexing HTMLTable VersionSort NameWidth=*

and then systemctl restart httpd to pick up the new config.
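If you'd rather script the change than open an editor, a sed one-liner works. This sketch operates on a throwaway copy so you can eyeball the result before touching the live /etc/httpd/conf.d/autoindex.conf:

```shell
# Work on a throwaway copy first (the live file is /etc/httpd/conf.d/autoindex.conf).
conf=./autoindex.conf.test
printf 'IndexOptions FancyIndexing HTMLTable VersionSort\n' > "$conf"

# Append NameWidth=* to the IndexOptions line, unless it's already set.
grep -q 'NameWidth' "$conf" || sed -i 's/^IndexOptions .*/& NameWidth=*/' "$conf"

cat "$conf"   # -> IndexOptions FancyIndexing HTMLTable VersionSort NameWidth=*
```

Run the same sed against the real file, then restart httpd as above.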


As indicated above, my server uses the CentOS Base repo package for Apache HTTPD; if you’ve installed from some other package source, your config might vary.  A lot of web posts refer to a “.htaccess” file. My server is an internal yum reposync / HTTP package server for my internal network, and it doesn’t even contain a “.htaccess” file.  From time to time I use a browser to look through the repo packages, but truncated filenames made that kinda useless.  So now it works.

The other options make the Name, Last Modified, and Size column headers sortable.  There’s no indication of that in browser windows, but just click on the column name and it sorts. Much easier to locate big useless font packages for addition to the yum exclude filter.
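Per the Apache mod_autoindex docs (a detail you won't see anywhere in the listing itself), those sort links are plain query strings, so you can also bookmark a pre-sorted view directly:

```
http://{ServerName}/repos/?C=N;O=A    # sort by Name, ascending
http://{ServerName}/repos/?C=M;O=D    # sort by Last Modified, newest first
http://{ServerName}/repos/?C=S;O=D    # sort by Size, biggest first
```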

Open Source software pitfalls; why you should test in a very restricted VM:

TL;DR: a free open source software package with a buried install routine that tries to CloudFormation-deploy a server cluster on AWS… which would incur ~$22K/month.


In recent weeks, I’ve been reviewing dozens of open source applications. Some of the stuff I’ve found deeper in the details has been truly amazing.

The best one so far has been an open source map rendering application. The concept is pretty straightforward… it provides a client app and a server app. On the client, a user creates a set of points (coordinates), and the server renders a map for the client.

The client/server model should enable multiple users to utilize low powered mobile devices to gather data and let one server do the heavier processing.

Sounds good; I added it to my “evaluations” list.

My primary goal for these evaluation activities is not to install/run these apps, but rather to collect some apps that I can use as working examples for my own software development “continuing education” projects. I learn more by taking things apart.

After some preliminary reviews, fiddling around with a live online demo site, and using some packaged bitnami/docker type bundles… I git cloned the source code and started exploring.

As I followed a few threads of package dependencies, I stumbled into a package of utilities and scripts… best described as “deploy an n-tiered high capacity solution.”

Notice that I found this in package dependencies and scripts.

Many app developers provide this kind of info in documentation. Not these folks. It’s deep in the installation/deployment code published in their GitHub repo.

So what happens when someone downloads and deploys the contents of their GitHub repo? For most people, I’d expect the process to fail rather quickly.

However, if attempted on a host that is used to admin/deploy stuff on AWS… you’d better hope someone configured AWS roles and account limits/restrictions.

If you’ve ever set up the “AWS CLI” to make it easier to administer AWS services… and then run “aws configure” to set up your keys… well… you should be getting a feeling that a train wreck is leaving the station.
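A quick, hedged sanity check before running any unfamiliar install script: see what cloud credentials the host would silently hand it. The paths below are the AWS CLI defaults; adapt for other providers:

```shell
# List AWS CLI credential/config files that an install script could quietly use.
for f in "$HOME/.aws/credentials" "$HOME/.aws/config"; do
  if [ -f "$f" ]; then
    echo "WARNING: $f exists -- deployment scripts can use it"
  fi
done
```

If either warning fires, evaluate the unknown software somewhere those files don’t exist (a restricted VM, per the title of this post).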

That client/server deployment process buried in the GitHub repo… it contains CloudFormation scripts to deploy a “high capacity rendering cluster”.

After I reviewed the target configuration, and looked up the current AWS pricing, I estimated the services turned on would have a monthly cost of about $22K.


That isn’t the only open source repo I’ve found that hides a gnarly surprise, but it’s currently at the top of my list.

A couple of others take a somewhat smaller bite, but nevertheless demonstrate why caution is necessary.

A package described as a howto guide walks the user through steps to set up a free Azure Functions demonstration… at completion, the user account is provisioned with a persistent (always-on) service that prices out at $1,803 per month. Unfortunately, the Microsoft Azure admin interface does nothing to warn the user or even ask permission. I only found that little gift by thoroughly reviewing all of the created services afterwards… and promptly deleting them. Had something external interrupted me, and had I delayed that post-config review, the unexpected bill would have been a real kick in the backside.

In another open source git repo, I found what appeared to be a promising little utility for scraping news sites and parsing the results into simple/clean text saved locally for offline reading. Turns out that streamlined reading material comes at a cost. The app’s install routine included a “pip install” of numpy, matplotlib, scipy, and tensorflow. Congratulations to anyone who installed this one… you just donated your machine to the authors’ machine learning network.
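Before running any repo’s installer, a grep for heavyweight dependencies is cheap insurance. A minimal sketch (the repo path and the flagged package names are just examples; extend the pattern to whatever worries you):

```shell
# Scan a cloned repo's install files for packages you didn't sign up for.
repo=./some-cloned-repo
grep -rniE 'tensorflow|torch|scipy|cloudformation' \
  --include='requirements*.txt' --include='setup.py' --include='*.sh' \
  "$repo" 2>/dev/null || echo "no flagged dependencies found"
```

It won’t catch everything (install code can fetch dependencies at runtime), but it would have flagged both of the surprises described above.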


If you’re thinking, “Gee, Wally, how many people actually make these kinda mistakes? Don’t detect them? And don’t clean them up?”

Google search “wasted cloud spend”.

An article published Jan 3rd, 2019 puts the current cloud waste at $14.1 Billion.

Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (SIDEBAR 3)

SIDEBAR 3 – PXE client note re memory:  the boot image uses a ramdisk.
Many of my VMs are headless servers, for application software that (when under a light load) will run fine under 512MB of Memory.  However, PXE Boot Images and the Install Image that runs the Anaconda Installer load a virtual ramdisk into memory.  With each Distro Release, it seems the size of that ramdisk grows.
The workaround is to allocate the VM 1536MB of vRAM for the installation phase, and then, post-installation, revisit the VM settings to reduce the vRAM allocation to 512MB.
2018-12: The updated CentOS “1810” boot images require 1,536MB of vRAM for the PXE client to boot and successfully run the Anaconda installer. After the install is complete, reducing the VM allocation to 512MB of memory is ok.
2018-10-04: PXE clients fail with less than 1,248MB of vRAM (the boot image uses a ramdisk). After the install is complete, reducing the VM allocation to 512MB of memory is ok.
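If you’d rather script the vRAM bump than click through the Fusion settings, the allocation lives in the VM’s .vmx file as “memsize”. A hedged sketch (the .vmx path is an example; the VM must be powered off when you edit it):

```shell
# Point this at your VM's .vmx file (example path); sed -i.bak keeps a backup.
vmx="$HOME/Virtual Machines.localized/c7lab1.vmwarevm/c7lab1.vmx"
sed -i.bak 's/^memsize = .*/memsize = "1536"/' "$vmx" 2>/dev/null
grep '^memsize' "$vmx" 2>/dev/null
```

Re-run it with "512" after the install completes.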

Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (SIDEBAR 2)

SIDEBAR 2 – Optional NFS SHARE: convenient for exploring repo contents from a gui desktop VM.

I only occasionally use this, either to find stuff in the local repos to filter out, or to verify information about packages in the local repos. After a reposync+filter has been established, and you’ve refined the filter to your needs, the NFS share becomes less useful.
 Requires “yum install nfs-utils”, if not already installed.
 Add a firewall rule to allow NFS from the local network.
 Edit “/etc/exports” to share “/var/www/html/repos/”:
nano /etc/exports
 /var/www/html/repos/ 10.0.0.0/24(ro)
 start and enable the NFS service:
systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
On the CLIENT side:
yum install nfs-utils
mkdir /mnt/nfs
 mount -t nfs -o ro,nosuid {NFS-ServerName}.local:/ /mnt/nfs/
mount -t nfs -o ro,nosuid c7pxe.local:/ /mnt/nfs/
nano /etc/fstab
 {NFS-ServerName}.local:/ /mnt/nfs/ nfs ro,nosuid 0 0
c7pxe.local:/ /mnt/nfs/ nfs ro,nosuid 0 0

NOTE: there is a problem with NFS clients: if the target server (source) of a shared NFS mount is offline when the client tries to shut down, the client can, and usually does, hang for some period of time trying to contact the NFS server to “gracefully” close the connection (even though there isn’t a connection to close).  [ I couldn’t make this up. ]

Since I don’t need NFS for anything else in this lab environment, I only briefly looked into workarounds and decided to just leave the NFS mount out of FSTAB on all but one client VM.
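If you do want the share in fstab everywhere, one workaround I’ve read about (but haven’t stress-tested myself) is to let systemd automount it on demand, so an offline server doesn’t block shutdown:

```
c7pxe.local:/ /mnt/nfs/ nfs ro,nosuid,noauto,x-systemd.automount,_netdev 0 0
```

The “noauto,x-systemd.automount” pair defers the mount until something touches /mnt/nfs/, and “_netdev” tells systemd to order it relative to the network. See the systemd.mount/nfs man pages before trusting it.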

Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (SIDEBAR 1)

SIDEBAR 1 – Alternate ways to provide PXE BOOT IMAGES to clients (a summary):

Like most things, there are plenty of other ways to provide PXE BOOT IMAGES to clients.
This section is not intended as a fully detailed guide, but rather to offer some ideas of how to approach other PXE/Kickstart scenarios.
Here are some approaches you may find useful:

Downloaded ISOs can be mounted under /var/www/html/repos/{version}/{}/ and served to NetInstall clients by HTTP (or NFS).

For example, if using a downloaded “CentOS-7-x86_64-Everything-1810.iso”
  • mkdir /var/www/html/repos/c7x64/ISOeverything/
  • mount /dev/cdrom /var/www/html/repos/c7x64/ISOeverything/
If this is done after the above steps (install/config httpd/PXE/etc), ISOeverything should be immediately available to network clients.
If a VM is booted from a netinstall.iso (not PXE), when the install screen asks for source media, enter the local URL for the server hosting “ISOeverything”:
  • http://{ip-address}/repos/c7x64/ISOeverything

 The files needed by PXE clients can be downloaded from the internet and placed into the target directories, bypassing the need to download ISOs at all.  Now that you’ve seen a couple sets of PXE Boot image files, you can figure out which equivalent files to download directly.
For CentOS and Fedora, the online repos usually follow a naming/pathing convention like these examples:
   (live URLs at the time of this writing)
  •  http://mirror.centos.org/centos/7/os/x86_64/
  •  http://mirror.centos.org/altarch/7/os/i386/
  •  http://mirror.centos.org/centos/6/os/x86_64/
  •  http://mirror.centos.org/centos/6/os/i386/
  •  https://mirrors.kernel.org/fedora/releases/29/Everything/x86_64/os/
For RedHat (RHEL) and Oracle, their subscription managers add a little bit of complication, but the pattern is essentially the same.  One of the simplest approaches for RHEL is to:
  • use a subscribed node to run a reposync script against the desired repos,
  • cp the PXE boot/install images from a downloaded ISO (matching the ReleaseVersion/Arch); compare the ISO’s location of the boot images to the online repos and find the correct URL path/pattern for future use.
  • use a Kickstart “%post” script for newly installed PXE/KS clients to join the subscription mgr.

To provide a local PXE/Kickstart for Fedora 29, refer to these steps as a starting point.
note: across the family of CentOS/Fedora/RHEL/Oracle… these boot/install images vary by release, version, iso-version, etc, etc… sometimes it is necessary to do a little reading, and try more than one file to find the ones that provide a successful PXE/Kickstart in your environment.
Download just the boot images. Then, using previous sections as a template, create a filtered yum config and a reposync script to get just the packages needed to install your desired config.

  • nano /var/lib/tftpboot/pxelinux.cfg/default   # add menu item(s) for booting Fedora.
  • nano /var/www/html/repos/f29.ks                   # make/edit kickstart file for Fedora install.
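The boot-image download itself can be sketched like this. Hedged: the images/pxeboot/ subdirectory is the usual Fedora repo convention under the mirror URL listed above, but verify it in a browser before relying on it. The commands are echoed rather than run, so you can review them and then paste them on a connected host:

```shell
# Build the download commands for the Fedora 29 PXE boot files.
base=https://mirrors.kernel.org/fedora/releases/29/Everything/x86_64/os
dest=/var/lib/tftpboot/Fedora29x64
for f in vmlinuz initrd.img; do
  echo "curl --create-dirs -o $dest/$f $base/images/pxeboot/$f"
done
```

(curl’s --create-dirs makes the local destination directories for you.)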

Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (STEP 14)

STEP 14 – Test PXE Boot and Kickstart installation.

Just create a new VM instance, and don’t provide it with any installation media.
Of course it will need a vdisk for the installation to work; ~6 or 8GB, set as type NVMe.
For headless servers, there usually isn’t any need for Bluetooth, Sound, 3D Video, or a Printer Port. I remove all of those from VM hardware profile.
The PXE/Kickstart install image the clients boot utilizes a ramdisk. In current/recent versions of CentOS, Fedora, RedHat, and Oracle linux, clients need 1,536MB of vRAM to load this installation image.  As soon as the installation is completed, and the VM is capable of using its own disk, the VM hardware memory allocation can be reduced… for many of my server VMs, I set it at 512MB.
1 vCPU is adequate.
Of course it will need a virtual network interface configured on the same VMNET as Fusion is providing DHCP with the PXE (“next-server”) option.
That’s it… start the VM.
If it works, there will be a lot of scrolling text… then eventually a prompt to quit/reboot… and you’ll have a working VM.
If something goes wrong, watch the screens, it’ll provide pretty good clues.  There are also methods to access (tail) the installation logs… but I’ll leave you to read up on that.  Most of the problems with relatively simple PXE/Kickstart setups like this are due to typos in the *.ks script or the “default” pxe boot menu.
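Since most failures here trace back to typos, one cheap pre-flight check (a sketch, assuming your menu and HTTP docroot match the earlier steps) is to verify that every ks= URL in the PXE menu points at a kickstart file that actually exists on the server:

```shell
# Cross-check ks= URLs in the PXE menu against the HTTP docroot.
menu=/var/lib/tftpboot/pxelinux.cfg/default
docroot=/var/www/html
grep -o 'ks=http://[^ ]*' "$menu" 2>/dev/null | while read -r ks; do
  path="$docroot/${ks#ks=http://*/}"     # strip scheme + host, keep repos/...
  if [ -f "$path" ]; then echo "OK      $path"; else echo "MISSING $path"; fi
done
```

It won’t catch typos inside the *.ks files, but it catches the “menu points at a kickstart that isn’t there” class of failure before you burn a boot cycle.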


Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (STEP 13)

STEP 13 – Provide PXE boot server info to DHCP clients, via VMware Fusion vnet config (not a CentOS DHCP server).

This config is on the VMware host.  In my case, that’s a macOS Mojave MacBook Pro running VMware Fusion. Any recent VMware hypervisor (Fusion, Workstation, ESXi) is capable of providing this. VirtualBox and Parallels can too.  The scope of this guide stays with VMware Fusion on macOS.

Fusion doesn’t provide a GUI interface for the DHCP PXE Boot Server option. But it does support a lot of additional features through config files and/or the command line.
For this step, open a MacOS Terminal window, and:
sudo su
nano /Library/Preferences/VMware\ Fusion/vmnet2/dhcpd.conf
Put this after the “DO NOT MODIFY” section of stuff… it’s “reimplementing the subnet”…
note: on the PXE Boot Server, the “pxelinux.0” file can be put in a subfolder, and then referenced in the DHCP config with this syntax:  filename "pxelinux/pxelinux.0";
My PXE server is providing the pxelinux.0 file at the default root of the TFTP server.
           the vnet dhcp config is a little less than obvious…
           the PXE Boot TFTP Server is represented by:  “next-server 10.0.0.11”

subnet 10.0.0.0 netmask 255.255.255.0 {
range 10.0.0.128 10.0.0.254;
option broadcast-address 10.0.0.255;
option domain-name-servers 10.0.0.2;
option domain-name localdomain;
default-lease-time 1800;                # default is 30 minutes
max-lease-time 7200;                    # default is 2 hours
option netbios-name-servers 10.0.0.2;
option routers 10.0.0.2;
next-server 10.0.0.11;  
  filename "pxelinux.0";
}
host vmnet2 {
hardware ethernet 00:55:55:C0:22:22;
fixed-address 10.0.0.1;
option domain-name-servers 0.0.0.0;
option domain-name "";
option routers 0.0.0.0;
}

* for simplicity, this VMNET config uses an entire Class C range (private/non-routable, of course), and then allocates the bottom half for static IPs and lets the DHCP process serve the top half.


TO RESTART FUSION DHCP SERVICE: without shutting down/restarting VMs/Fusion
(2019-02-22):
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --stop
sudo /Applications/VMware\ Fusion.app/Contents/Library/vmnet-cli --start

Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (STEP 12)

STEP 12 – Put the required PXE client boot files in place.

We’re going to make a folder structure that supports two Distro/Release/Arch versions to start with, and can easily be updated with additional versions down the road.
mkdir /var/lib/tftpboot/CentOS7x64/
mkdir /var/lib/tftpboot/CentOS7x32/
Now, to copy the minimum boot files over to the TFTP server… we need to mount an installation media ISO… such as “CentOS-7-x86_64-NetInstall-1810.iso” for CentOS 7 64-bit, and cp the boot files to the tftpboot directories.
The exact details of mounting media (ISOs) can vary…
I’m doing this on VMware Fusion, and the vm has open-vm-tools installed and active, so, my method is to use vmware to choose and connect the disc image to the VM, then, within the vm:
mkdir -p /mnt/cdrom
mount /dev/cdrom /mnt/cdrom/
cd /mnt/cdrom/isolinux/   # purely optional, just to see what’s there.
ls -l /mnt/cdrom/isolinux/    # purely optional, just to see what’s there.
#
cp /mnt/cdrom/isolinux/{vmlinuz,initrd.img,splash.png} /var/lib/tftpboot/CentOS7x64/
If you’re running a GUI Desktop (GNOME) on your VM, it might auto-mount the cd-rom, and this might be your command line option for copying the files:
# cp /run/media/{username}/CentOS\ 7\ x86_64/isolinux/{vmlinuz,initrd.img,splash.png} /var/lib/tftpboot/CentOS7x64/
# cp /run/media/elmer/CentOS\ 7\ x86_64/isolinux/{vmlinuz,initrd.img,splash.png} /var/lib/tftpboot/CentOS7x64/
#
cd /var/lib/tftpboot/CentOS7x64/ # verify cp results.
Now, we also need to copy “LiveOS” from that bootable iso:
mkdir /var/www/html/repos/c7x64/base/LiveOS
mkdir /var/www/html/repos/c7x32/base/LiveOS
NOTE: in the section “Create a PXE BOOT MENU”, the menu provides different versions of vmlinuz + initrd.img,
  • this is how/where those are provided to the PXE/Kickstart clients.
  • Each targeted Distro/Release/Arch requires a matching “LiveOS” be provided.
  • When the client node boots into this image, this is what runs the Anaconda installer (and processes the kickstart script).
  • cp /mnt/cdrom/LiveOS/* /var/www/html/repos/c7x64/base/LiveOS/
  • # OR: cp /run/media/{username}/CentOS\ 7\ x86_64/LiveOS/* /var/www/html/repos/c7x64/base/LiveOS/
  • # cp /run/media/elmer/CentOS\ 7\ x86_64/LiveOS/* /var/www/html/repos/c7x64/base/LiveOS/

Now, switch to the 32-bit ISO and cp those files as well:

  • umount /dev/cdrom
For VMware Fusion: choose/connect a CentOS 7 32-bit ISO (like CentOS-7-i386-NetInstall-1810.iso)
  • mount /dev/cdrom /mnt/cdrom/
  • cp /mnt/cdrom/isolinux/{vmlinuz,initrd.img,splash.png} /var/lib/tftpboot/CentOS7x32/
  • cp /mnt/cdrom/LiveOS/* /var/www/html/repos/c7x32/base/LiveOS/

Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (STEP 11)

STEP 11 – create the kickstart files referenced by the PXE Boot menu:
Here is one of my files. Use it as a template. Copy/paste/edit as needed.
# file = lab1x64.ks

# version=DEVEL
# ###############################################
# 2019-03-22: Kickstart script for client "c7lab1.lab.domain.net c7lab1.local c7lab1".
#             Serve "lab1x64.ks" at ks=http://10.0.0.11/repos/lab1x64.ks
#             Client VM uses DISK TYPE = NVMe.
#             This ks successfully omits "dracut rescue images" from "/boot".
#             Also omits a lot of other package bloat that a Virtual Server doesn't need.
#
# If you want a different kickstart config, you'll need to research the options.
# One way to get a good example config is to manually do an install with the
# options you want.  Then, on the resulting system, look in "/root/anaconda-ks.cfg"
# and use that as your kickstart template.
# ###############################################
firewall --enabled --service=ssh --service=mdns
selinux --permissive
# System authorization information
auth --enableshadow --passalgo=sha512
# ###############################################
repo --name=updates --baseurl=http://10.0.0.11/repos/c7x64/updates/
repo --name=epel --baseurl=http://10.0.0.11/repos/c7x64/epel/
repo --name=extras --baseurl=http://10.0.0.11/repos/c7x64/extras/
# ###############################################
# Use text mode install
text
# Do not configure the X Window System
skipx
# ###############################################
# Run the Setup Agent on first boot
firstboot --enable
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8
#
# NETWORK information
network  --bootproto=dhcp --device=ens33 --noipv6 --activate --hostname=c7lab1.lab.domain.net
# Root password
rootpw --iscrypted $6$iBFA4yWORTlm1Dnt$zPYZ.ArpJiPQQ8DKrtx8J.kaiIUHpCXxhPBN85smQBHwCtLr8u2tQEa3P.fXrKHiWRZ6qnTseZNDsi78Sk/0H1
# note: the plaintext of this password is "elmer".
# DON'T USE THAT.
# Choose your own password, use this terminal command to hash it, and paste the output back here.
# python -c 'import crypt,getpass;pw=getpass.getpass();print(crypt.crypt(pw) if (pw==getpass.getpass("Confirm: ")) else exit())'
#
# System services
services --enabled=sshd
#
# System timezone
timezone America/Chicago --isUtc
#
user --groups=wheel --name=elmer --password=$6$iBFA4yWORTlm1Dnt$zPYZ.ArpJiPQQ8DKrtx8J.kaiIUHpCXxhPBN85smQBHwCtLr8u2tQEa3P.fXrKHiWRZ6qnTseZNDsi78Sk/0H1 --iscrypted --gecos="elmer"
#
# ###############################################
ignoredisk --only-use=nvme0n1   # use this if VM DISK TYPE = NVMe
# System bootloader configuration
bootloader --location=mbr --boot-drive=nvme0n1  # use this if VM DISK TYPE = NVMe
#
# Partition clearing information
clearpart --none --initlabel
#
# I've chosen to allocate 512 MiB to "/boot", and automatically allocate all remaining space to "/".
# Disk partitioning information:
part /boot --fstype="xfs" --ondisk=nvme0n1 --size=512 # if VM DISK TYPE = NVMe
part pv.252 --fstype="lvmpv" --ondisk=nvme0n1 --size=1 --grow # if VM DISK TYPE = NVMe
#
volgroup centos --pesize=4096 pv.252
logvol /  --fstype="xfs" --name=root --vgname=centos --percent=100      # auto allocate remaining space to "/"
# ###############################################
# Selecting and excluding packages is often a "trial and error" endeavor.
# If you haven't been down this rabbit hole before, you'll be surprised by
# some of the unexpected dependencies between packages
# that really shouldn't have any interdependencies at all.
#
%packages --instLangs=en_US.utf8 --ignoremissing --excludedocs
@core --nodefaults
# ###############################################
# my list of frequently used packages:
epel-release   # extras #
yum-utils      # base   # installs 337k
deltarpm       # base   # installs 209k
nano           # base   # downloads 440k, installs 1.6M
nss-mdns       # EPEL   # installs 131K
htop           # EPEL   # installs 281K
rng-tools      # base   # downloads 49k, installs 102k
#
# ip address, nmtui, top       # base # included with @core.
# make, gzip, tar, curl           # base # included with @core.
# open-vm-tools                     # base # installed with @core, provides vmware-hgfsclient, vmhgfs-fuse, vmware-toolbox-cmd.
# ###############################################
# firmware packages to exclude:
-aic*-firmware
-alsa*
-atm*-firmware
-b43-openfwwf
-bfa-firmware
-fprintd-pam
-intltool
-ipw*-firmware
-ivtv* # skips a set of big video packages
-iwl*-firmware # skips a lot of unnecessary firmware packages (mostly Intel wifi).
-libertas* # skips a lot of unnecessary firmware packages.
-linux-firmware # note: the installer will ignore this one; so remove it in %post.
-ql2100-firmware
-ql2200-firmware
-ql23xx-firmware
-ql2400-firmware
-ql2500-firmware
-rt61pci-firmware
-rt73usb-firmware
-xorg-x11-drv-ati-firmware
-zd1211-firmware
# ###############################################
# some more exclusions
-centos-logos           # try it… saves 22MB; unfortunately there are a lot of apps (like httpd) that pull it in.
-crontabs
-dracut-config-rescue   # This saves a lot of space in "/boot/".
# -GeoIP # looking up IP/Country isn't something I need on these VMs, but "dhclient" requires it.
-iprutils
-kernel-tools
-libteam                # this is for "network interface teaming", not something I need.
-man-db                 # Useful, but I don't need it on every VM in the fleet.
-mozjs17                # seems weird to have a javascript package on a baseline headless server.
-NetworkManager-team    # this is for "network interface teaming", not something I need.
-newt-python            # part of a set of packages that do GUI things.
-openssh-clients        # these server VM instances do NOT need to make outbound SSH client connections.
-plymouth               # this is the "pretty" boot screen, serves no purpose on a headless VM.
-plymouth-core-libs     #
-postfix                # an email server.
-sg3_utils              # related to SCSI devices, which this VM hardware profile does not have.
-sg3_utils-libs         #
-snappy                 # a compression utility, one of many, and not one of the best.
# -wpa_supplicant       # seems dumb to have this on a system that can't do wifi; but NetworkManager and NMTUI require it.
# ###############################################
%end
# ###############################################
#
#
# ###############################################
# ADDON section of KICKSTART SCRIPT:
%addon com_redhat_kdump --disable --reserve-mb='auto'
%end
# ###############################################
# “post” section of KICKSTART SCRIPT:
%post --log=/root/ks-post.log
#
# enable the vmware shared folders (makes them available on 1st boot):
mkdir /mnt/hgfs
echo "" >> /etc/fstab
echo "# enable vmware shared folders: " >> /etc/fstab
echo ".host:/ /mnt/hgfs fuse.vmhgfs-fuse allow_other 0 0" >> /etc/fstab
echo " " >> /etc/fstab
# setup the c7pxe YUM REPO config:
rm -f /etc/yum.repos.d/*
curl -o /etc/yum.repos.d/c7x64.repo http://10.0.0.11/repos/client-files/c7x64.repo
# copy standard scripts into $HOME:
curl -o /home/elmer/shrink-disk.sh http://10.0.0.11/repos/client-files/shrink-disk.sh
curl -o /home/elmer/yum-clean.sh http://10.0.0.11/repos/client-files/yum-clean.sh
curl -o /home/elmer/backupConfigFiles.sh http://10.0.0.11/repos/client-files/backupConfigFiles.sh
chmod +x /home/elmer/shrink-disk.sh
chmod +x /home/elmer/yum-clean.sh
chmod +x /home/elmer/backupConfigFiles.sh
touch /home/elmer/.vm-installed-by-PXE-lab1x64.sh
# clean out the yum cache, and remove the unnecessary "linux-firmware" package (it's about 175 MB):
yum clean all
yum -y remove linux-firmware
yum clean all
%end
# ###############################################
#
#
# ###############################################
# ANACONDA section of KICKSTART SCRIPT:
%anaconda
pwpolicy root --minlen=6 --minquality=1 --notstrict --nochanges --notempty
pwpolicy user --minlen=6 --minquality=1 --notstrict --nochanges --emptyok
pwpolicy luks --minlen=6 --minquality=1 --notstrict --nochanges --notempty
%end
# ###############################################

Build a CentOS7 server for: pxe boot, kickstart, reposync, repotrack, nfs, https (STEP 10)

STEP 10 – Create a PXE BOOT MENU:

This version of the PXE BOOT MENU provides:
  • 2 kickstart configs for CentOS 7 64-bit (c7x64).
  • 1 kickstart config  for CentOS 7 32-bit (c7x32).
  • non-kickstart netinstall of c7x64 or c7x32, using local repos.
  • non-kickstart netinstall of c7x64 or c7x32, using online internet mirror repos.
  • menu DEFAULT is set to boot from local hard drive (to avoid accidentally overwriting an existing system).
*separate vmlinuz + initrd.img are provided to different Distro/Release/Arch options,
how/where to get those is documented in another step.

default vesamenu.c32
timeout 200
menu resolution 1024 768 # Without this, the menu was truncating the displayed lines and wasn’t very readable.
menu background splash.png
ontimeout local
label ks1
menu label ^Install CentOS7x64 from c7pxe, lab1 kickstart, minimized config (~550MB).
        kernel CentOS7x64/vmlinuz
        append initrd=CentOS7x64/initrd.img ip=dhcp ks=http://10.0.0.11/repos/lab1x64.ks
label ks2
menu label ^Install CentOS7x64 from c7pxe, lab2 kickstart, aggressively minimized config ( < 550MB TBD ).
        kernel CentOS7x64/vmlinuz
        append initrd=CentOS7x64/initrd.img ip=dhcp ks=http://10.0.0.11/repos/lab2x64.ks
label ks3
menu label ^Install CentOS7x32 from c7pxe, c7x32lab1 kickstart, minimized config (~600MB).
        kernel CentOS7x32/vmlinuz
        append initrd=CentOS7x32/initrd.img ip=dhcp ks=http://10.0.0.11/repos/lab1x32.ks
label x64rs2
menu label ^Install CentOS7x64 from c7pxe, NO kickstart
        kernel CentOS7x64/vmlinuz
        append initrd=CentOS7x64/initrd.img ip=dhcp repo=http://10.0.0.11/repos/c7x64/base/
label x32rs2
menu label ^Install CentOS7x32 from c7pxe, NO kickstart
        kernel CentOS7x32/vmlinuz
        append initrd=CentOS7x32/initrd.img ip=dhcp repo=http://10.0.0.11/repos/c7x32/base/
label x64mirror
menu label ^Install CentOS7x64 from http://mirror.centos.org, no kickstart
        kernel CentOS7x64/vmlinuz
        append initrd=CentOS7x64/initrd.img ip=dhcp method=http://mirror.centos.org/centos/7/os/x86_64/ devfs=nomount
label x32mirror
menu label ^Install CentOS7x32 from http://mirror.centos.org, no kickstart
        kernel CentOS7x32/vmlinuz
        append initrd=CentOS7x32/initrd.img ip=dhcp method=http://mirror.centos.org/centos/altarch/7/os/i386/ devfs=nomount
label local
menu label Boot from Hard Drive
        menu default
        localboot 0xffff