Revision date: Dec 17, 2024
TOC
- [netman] - Open networking guide
- [troubleshootman] - Open Troubleshooting guide
- Part I - Basic Unix and Utilities
- Part II - System Administration
- Part III - Hardware and Accessories
- Part IV - Graphical Working Environment
- Part V - Linux Distros, Package Managers, DOS and Windows
- Part VI - Applications
- Part VII - Programming
- Part VIII - Virtualization
- Part IX - Android
- Part X - Computer Architecture
- Revision History
Basic Unix
********************************************************************************
* - Basic Unix -
********************************************************************************
--------------
| Introduction |
--------------
Unix commands and programs have traditionally been launched within the context of
a shell, which is a program that allows a user to interact with the OS via a
keyboard and terminal. Unlike many other OSs, the shell has always been an
integral part of Unix based systems. Although today many Unix OSs can be more
or less managed graphically using GUI applications, the true power of the Unix
OS can only be unleashed by way of the shell.
Most of this guide is about using commands within the context of a shell.
Although some of the programs mentioned here are graphical, most run inside a
terminal under one shell or another.
Unix-like OSs offer a variety of shells. To the casual user they all seem
about the same, but to more experienced users, which shell they employ is
no small matter. Probably all Unix OS distributions ship with the bash
shell. This is likely the default shell in your installation. It is a very
powerful shell with many features, too many to list here.
I often reference the book "Learning the bash Shell"
when I need to configure the shell, or write a bash script.
Other shells I know of are csh, tcsh, zsh.
The default shell for a given user is stored in the file /etc/passwd.
One can change their default shell with the command chsh.
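For example, assuming zsh is installed and listed in /etc/shells, one could make
it the default shell with
$ chsh -s /bin/zsh
The change takes effect at the next login.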
To switch from one shell to another temporarily, simply type the name of
the shell you wish to switch into in the command line. For example
$ csh
will launch csh in your terminal. To exit that shell simply type
$ exit
Note, a shell is just a program, so you can nest a shell within a shell.
For instance
$ csh
$ bash
$ zsh
$ csh
Will simply run csh, within zsh within bash within csh within whatever your
default shell was.
---------------
| Unix Commands |
---------------
Unix commands can be divided into three categories:
* Shell commands
These are the commands provided by the shell, such as cd, pwd,
alias, for. Some shell commands may have equivalent command line
binary version. For instance echo is a built-in shell command. You
may also invoke /usr/bin/echo. The two may differ in terms of options and
functionality.
Additionally, it is important to note that different shells may have built-in
commands that are unique to them. For instance the bash shell and
csh will each have commands that are unique to them. (A way to tell a
built-in from a binary is shown in the example following this list.)
* Shell scripts
These are executable scripts that use either a shell's scripting language,
or some other language such as python to accomplish a desired task.
Complex scripts can be written to accomplish functionality similar to a
program written in C, and the user may not even be aware that he is
executing a script, as opposed to a binary program.
* Binary programs or applications
This category includes any command that launches a binary program.
For example ls, grep, libreoffice.
Many such programs are found in /usr/bin. But there are other locations
where they may be found.
Use the command which to tell you where they are located.
Such programs may rely on the shell to interact with the user and output
information into the terminal or a file, whereas others may be graphical
applications that rely on a graphical system such as X Windows or a
Wayland compositor to launch windows and dialogs.
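As an illustration of the built-in versus binary distinction mentioned above,
the bash built-in type reports how a command name would be interpreted (the
output shown is typical and may vary by system):
$ type echo
echo is a shell builtin
$ type -a echo
echo is a shell builtin
echo is /usr/bin/echo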
Most Unix commands come with options or command line arguments.
Unix is also known for its sophisticated pipeline and redirection
features, which other OSs have adopted to some extent over the years in their
own shells.
It is interesting to note that MacOS and the Android OS are derived from Unix
based OSs, and can be managed with Unix style shells.
The following are some basic Unix commands, some of which are more commonly
encountered and some less. This is by no means an exhaustive list. Your Unix
installation may have thousands of commands at your disposal.
jobs Get a list of jobs
ctrl-z To suspend a job
bg %[job no] Place a job in the background
fg %[job no] Place a job in the foreground
w,who,whodo To see who is logged in and what they are doing on the computer
which filename Get location of a command file (will only identify
those which are in one of the directories specified
in $PATH)
whereis progname Locate source, binary, and or manual for program
whatis Get summary of a command
passwd To change a user's password
yppasswd (or passwd) To change an NIS password (deprecated); use passwd
chsh Change shells
hinv Displays contents of system hardware inventory table
quota -v (or quota) Checks for quotas
du -k [dir_name] Checks for disk space occupied by current or specified
directory.
df -k [device_name] Report free disk space in kilobytes
file filename Determine file type of file with name filename
test Evaluate a conditional expression (see manpage)
split Split files into smaller units (useful w/ floppies)
tr Translate characters
umask Change the permissions mask
newgrp Run a shell which switches your default group (for file
and directory creation)
xon Run a program on a remote machine
ln -s tgtdir symdir Creates a symbolic link "symdir" that points to tgtdir
zip -r archivename.zip archive Compresses and archives files into one file
(see more about zip and others in Compression/Tarring section)
basename /home/user/dir/fil Strip path (i.e. retains only "fil")
dirname /home/user/dir/fil Retains only "/home/user/dir" (strips file name)
uptime Display how long the system has been running
ls -R dir List all contents of directory "dir", descending
recursively into it.
cd .. Navigate up by one directory
cd - Change to previous directory
------------
| ls command |
------------
Without arguments the ls command lists the contents of the current directory.
However, ls has many arguments that make it a very powerful command.
Some examples:
* List files in a directory given its full path name:
$ ls /home/jdoe/pics/family
* List files in a directory given a relative path name:
$ ls pics/family
Note, you must be currently in the directory containing the pics subdirectory
for this to work. Otherwise ls will return an error message:
"ls: cannot access 'pics/family': No such file or directory:"
* List contents of current directory with added information (long listing)
$ ls -l
Sample output:
-rw-r--r--. 1 lisa work 107239 Nov 3 16:15 invite-list.txt
-rw-rw-r--. 1 lisa work 1843674 Jul 25 2019 'Conference Schedule.pdf'
drwxrwxr-x. 6 lisa lisa 4096 Jun 2 2017 memos
The first field relates to the file's or directory's permissions (see here
for more about permissions and how to interpret this field.)
The second field is the link count. For regular files it is typically 1; for a
directory it is 2 plus the number of its immediate subdirectories.
The third field specifies the owner of the file/dir.
The fourth field is the group to which the file/dir belongs to.
The fifth field is the size in bytes of the file. For directory entries
the size is typically 4096 (one filesystem block). This does not indicate the
disk space occupied by its contents. See du -k command above for that.
The sixth field specifies the date/time (by default the last modification time).
The last field is the name of the file/dir.
* List contents of current directory with one file/directory per line
$ ls -1
* List directory memos rather than its contents
$ ls -d memos
* Combining options
$ ls -d -l memos
or
$ ls -dl memos
Examples of more advanced features:
* List security context (helpful when working with SELinux)
$ ls -Z
* In time field list last access time rather than modification time
$ ls -l --time="access" invite-list.txt
Valid arguments for --time option are:
- 'atime', 'access', 'use'
- 'ctime', 'status'
- 'birth', 'creation'
* List files in current directory and redirect to a file
$ ls > listing.txt
For more about ls and all its options, refer to the man page
$ man ls
-----------
| Processes |
-----------
UNIX has been a true multitasking operating system (OS) since its inception
in the 1970s. In a multitasking environment concurrently running processes
are managed by the OS. The OS also provides functionality for starting,
spawning, ending, and monitoring these processes.
Following are some examples of commands for managing processes.
To get a list of processes:
* Formatted output examples with Linux
$ ps -C bash -o "%P%U" --no-headers | grep jdoe
Prints out all processes for bash without header with only PPID and user,
and pipelining into grep (a text-searching utility), so as to
display only those belonging to user "jdoe".
Note, if another user exists whose username contains the character string
"jdoe" (e.g. jdoes), then his processes will also be shown, since his
user name also matches the search criteria specified to the grep program.
* For Sun (Sparc workstations):
$ ps [-au]
Some of the more commonly used options
a = all processes
u = user format
x = processes with no controlling terminal
* For HP+SGI workstations:
$ ps [-eaf]
a = all interesting processes
f = friendly format
e = every process
Use the pidof command to get the process ID of an actively running command
or program. For example
$ pidof xterm
will return the PID of all instances of xterm. With the -s option only one
PID is returned.
To kill (end) a process:
$ kill %X # Where X is a job number as returned by jobs command
or
$ kill pid # Where pid is process id as returned by ps command
To suspend a process:
$ kill -STOP pid
To resume a process:
$ kill -CONT pid
Killing a process sends a signal (by default SIGTERM) asking the program to end
itself. Sometimes the program is "hung" and may not respond to the request.
In such a case adding the "-9" option (SIGKILL) will kill it forcibly
$ kill -9 pid
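Putting pidof and kill together, a hypothetical session to suspend, resume and
finally end a single running xterm might look like this:
$ kill -STOP $(pidof -s xterm)    # suspend it
$ kill -CONT $(pidof -s xterm)    # resume it
$ kill $(pidof -s xterm)          # ask it to terminate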
Note, the kill command has a built-in shell version and a non-shell version.
Bash Tutorial
********************************************************************************
* - Bash Tutorial -
********************************************************************************
Bash, an acronym for "Bourne again shell" is a GNU derivative of the Bourne
shell (the standard shell for the UNIX OS). It combines some of the best
features of the Bourne shell, csh and korn shell, and is probably the most
widely used shell today. An excellent reference on Bash is the previously cited
book "Learning the bash Shell".
Many online tutorials exist as well, and many of Bash's features are described
in its man page.
$ man bash
See also this article in Fedora Magazine about Customizing Bash.
In this section I provide a cursory overview of Bash, combined with a modest
sample of useful features coupled with examples.
---------------
| configuration |
---------------
* .bashrc
This script is executed every time the bash shell is run, such as when a
terminal is opened, or when invoking
$ bash
* .bash_profile
This script is executed once at login. This is a good place to set the
PATH environment variable.
* Example of configuration with bash variables:
* To set the shell prompt to "username $"
PS1="\u $" # \u means user name
* Adding a directory to path
PATH=$PATH":/..."
* To set the internal field separator
For example, to set the separator to be a comma separator
IFS=,
Note, the IFS can contain more than one separator character.
For example IFS=',;' sets the separators to be a comma and a semicolon.
IFS defaults to space, tab and newline.
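A minimal sketch of a .bashrc that puts some of these settings together (the
directory name is illustrative):
# excerpt from ~/.bashrc
PS1="\u $"                     # prompt showing the user name
PATH=$PATH":/home/jdoe/bin"    # add a personal bin directory to the path
The effect of IFS is easiest to see with the read built-in, for example
$ IFS=, read -r a b c <<< "apples,pears,plums"
$ echo $b
pears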
-----------------
| Custom commands |
-----------------
Without the ability to define command shortcuts, using the shell can become
tedious at best. In order to make your shell experience a pleasant one,
Bash offers three ways to define custom commands.
(1) Alias
The easiest way to create a custom command or a command shortcut is to define
an alias:
$ alias aliasname=command
For example a user who constantly has to issue the command
$ scanimage -l 0 -t 0 -x 210 -y 297 --resolution 200 -p --mode Gray --format=tiff --lamp-off-at-exit=no
may want to define an alias for it, so that all he needs to type to invoke
the command is, say,
$ scangray
Some examples of useful aliases
$ alias lsall='ls -a' # List all files, including hidden files
$ alias lsbw='ls --color=never' # Listed items should be in black & white
$ alias cdproj='cd /home/jdoe/projects'
$ alias dispsched='cd /home/jdoe/schedule/; cat schedule.txt'
$ alias cu='cd ..' # Change one directory up
$ alias cuu='cd ../..' # Change two directories up
If I now issue the command (alias)
$ lsall /home/jdoe
it's as though I typed
$ ls -a /home/jdoe
To remove an alias, issue the command
$ unalias aliasname
(2) Functions
Functions are kind of like aliases, except they can take arguments. By
accepting arguments a function is able to accomplish more complex tasks than
aliases. To invoke, simply type the name of the function.
The syntax of a function definition is
function functionname() {
# Place all your commands here.
}
Examples:
function scangray () {
# A function to scan a document and save as a tiff file
if [ $# = 4 ] # Check if four arguments were provided
then
scanimage -l 0 -t 0 -x $1 -y $2 --resolution $3 -p --mode Gray \
--format=tiff --lamp-off-at-exit=no > $4
else
echo "Usage: scangray width height resolution outfile"
echo " width and height are in mm; resolution is in dpi"
echo " outfile is tiff format"
fi
}
This function takes four arguments. If four arguments were not entered, it
will display a message for correct usage.
Another example:
function scanbusinesscard () {
# Function to scan two sides of a business card and tile images
if [ $# = 1 ]
then
scangray 90 50 200 a1.tif
echo 'Flip over business card and press enter'
read ans
scangray 90 50 200 a2.tif
# Use Imagemagick to tile the two images
montage a1.tif a2.tif -tile 1x2 -geometry +0+0 -depth 8 a3.tif
# Use Imagemagick to convert to pdf
convert a3.tif $1.pdf
# Display resulting pdf
xpdf $1.pdf
# clean up
rm -f a1.tif a2.tif a3.tif
else
echo "Usage: scanbusinesscard filename"
echo " filename should be specified without extension"
fi
}
This function scans two sides of a business card, tiles the images and
outputs a pdf file. It accepts one argument -- the name of the file.
To undo a function definition, issue the command:
$ unset -f functionname
(3) Scripts
Scripts are similar to functions, except everything that would be in the
curly braces of the function definition would simply go into a text file.
As with functions a script can also take arguments.
In order to execute the script, it must have the execute bit set.
i.e.
$ chmod u+x myscript
To execute the script simply type
$ ./myscript
or
$ /fullpath/myscript
or
$ bash myscript
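As a minimal sketch, a script file (here given the hypothetical name myscript)
could contain
#!/bin/bash
# Greet whoever is named in the first argument
echo "Hello, $1"
The first line (the "shebang") tells the system which interpreter runs the
script. After setting the execute bit as shown above, invoking
$ ./myscript World
prints "Hello, World".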
Aliases and functions should go in your .bashrc file. This way they will
be available to you during each shell session without further action on your
part.
If you don't want to place them in your .bashrc file, then you can place them
in some other file, say myaliases, and then issue the command
$ source myaliases
This causes bash to parse, execute and retain whatever is in myaliases as
though it were your .bashrc file.
Similarly if you made changes to your .bashrc file and wish to have the changes
take effect in the current shell session, then issue the command
$ source .bashrc
-----------------------------------------
| Listing aliases variables and functions |
-----------------------------------------
* List all aliases and their definitions
$ alias
* List an alias and its definition
$ alias thealias
* List all variables and functions and their definitions
$ declare
* List all function names, without their definitions
$ declare -F
* List a function and its definition
$ declare -f functionname
* List all array variables
$ declare -a
-----------------
| wild characters |
-----------------
Bash supports filename expansion characters:
? Expands to any single character
* Expands to any string of characters
[set] Expands to any character in set
[!set] Expands to any character not in set
Examples:
Suppose a directory has files named fil1, fil2, fil9, joe, sam and samantha.
$ ls *1
will list any file in the directory that ends with a "1", which would be "fil1".
$ ls sam*
will list the files "sam" and "samantha".
$ ls fil?
will list the files "fil1", "fil2", "fil9".
$ ls fil[29]
will list the files "fil2" and "fil9".
$ ls sam?
will not list any files.
$ ls fil[!2]
will list files "fil1" and "fil9".
$ ls *.pdf
will list all files whose extension is "pdf".
--------------------------------------------
| Options - controlling the shell's behavior |
--------------------------------------------
Bash has many settable boolean options that affect the shell's behavior.
* Setting an option to on
$ set -o optionname
* Setting an option to off
$ set +o optionname
Some options of interest are
* To prevent output redirection from overwriting an existing file:
$ set -o noclobber
* To suppress expansion of files names with wildcards (e.g. *, ?)
$ set -o noglob
Bash allows you to choose between two enhanced editing modes:
vi mode and emacs mode. These modes provide you with
enhanced editing features when entering text in the command line, each in the
style of the editor they are named for (see "Learning the bash Shell" pp. 28-45 for
more.)
* Turn on vi editing mode (vi style editing)
$ set -o vi
* Turn off vi editing mode
$ set +o vi
* Turns on emacs editing mode (emacs style editing)
$ set -o emacs
The shopt command is a newer feature of Bash whose purpose is to
provide an improved framework for controlling the shell's behavior.
See Bash man page for more.
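One example of an option it controls is cdspell, which lets cd correct minor
spelling mistakes in directory names:
$ shopt -s cdspell    # turn the option on
$ shopt -u cdspell    # turn it off
$ shopt cdspell       # display its current state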
--------------
| Status codes |
--------------
All Unix commands return a status code when completing. This status code can
be used in a script or function to decide what to do next. For instance, if
the command returned successfully then proceed with the script; if not display
an error message.
Traditionally, a status code of 0 means the command completed successfully,
whereas a status code of 1 (or some other positive number) means some
sort of error occurred.
To determine the status code of a command that was just executed
$ echo $?
Only the status code of the most recent command is retained.
To save the status code for future reference, store it in a variable, as such
$ varname=$?
When writing a script or function, if no exit command is present, the script
or function automatically returns an exit code of 0.
You can use the exit command to control the exit status your script or
function returns.
For example exit 0 will return 0; whereas exit 1 will return exit code 1.
You can use exit codes other than 0 or 1 to indicate some other exception or
circumstance that occurred in the program. For example a program might be
designed to generate status codes between 0 and 5 in accordance with the
circumstances cited in the table:
Exit Status Circumstance
----------- ------------
0 Program completed successfully
1 Program exited because input file not found
2 Program exited because of a syntax error in the input file
3 Program exited because it has too many command line arguments
4 Program exited because it has too few command line arguments
5 Program exited because it cannot create an output file
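As a small sketch of acting on an exit status, grep returns 0 when a match is
found and 1 otherwise, so it can drive a conditional directly (the if construct
is covered in the next section):
if grep -q 'jdoe' /etc/passwd
then
echo "jdoe has an account on this system"
else
echo "jdoe was not found"    # grep returned a non-zero status
fi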
------------------------
| Programming constructs |
------------------------
Bash, besides being a shell, is also a scripting language. You can enter your
lines of script directly in the shell's command line, or enter them in a file
and execute the file. The scripting features of Bash include the
conditional construct and other flow control mechanisms:
if-then-else, for loop, while loop, until loop, case, and the
select menu feature.
Probably the most commonly used flow control construct in Bash scripting is the
if-then-else construct:
if condition
then
...
elif condition
...
else
...
fi
The condition is itself a command; its exit status determines which branch is taken.
For example, consider the small script
if cd $HOME/projects
then
echo These are the contents of your \"projects\" directory:
ls -1 $HOME/projects
else
echo You have no \"projects\" directory, or it is inaccessible.
fi
This is what the script does:
* If the command "cd $HOME/projects" evaluates successfully (i.e. the projects
directory exists and the user has access rights to it), then the first branch
of the conditional construct is executed, thus listing the contents of the
projects directory.
* If the command did not evaluate successfully (i.e. there is no such directory,
or the user has no access rights to it), then the second branch of the
conditional construct is executed, informing the user he has no such directory,
or it is inaccessible.
Note, by convention an exit status of 0 signifies success. Therefore, the if
statement invokes the then part of the construct if the condition evaluates
to zero. This is contrary to the C programming language, where a condition that is
non-zero invokes it.
Bash also provides a mechanism for testing boolean style conditions.
The syntax for this test mechanism is
[ boolean-conditions ]
You can use this to compare strings.
* Equality
[ str1 = str2 ]
will evaluate successfully only if the two strings match each other.
* Inequality
[ str1 != str2 ]
will evaluate successfully only if the two strings differ.
* Not Null
[ -n str ]
will evaluate successfully only if str is not null
* Null
[ -z str ]
will evaluate successfully only if str is null (i.e. "")
* Inequalities
[ str1 \> str2 ]
will evaluate successfully only if str1 is lexically greater than str2
[ str1 \< str2 ]
will evaluate successfully only if str1 is lexically less than str2
(The backslashes keep the shell from treating > and < as redirection operators.)
Note, in the above, the opening brackets must be followed by a space,
and the closing brackets must be preceded by a space (as in the examples).
Note, the notation [ ] is equivalent to executing the test command.
For example [ -z "hello" ] is equivalent to test -z "hello"
See man page for the test command for more.
You can also perform file attribute checking:
* Check if file exists
[ -a file ]
* Check if directory exists
[ -d directory ]
See above cited book p. 117, or man page on the test command for more about
file attribute checking.
Bash also supports integer comparisons
* Less than
[ int1 -lt int2 ]
* Greater than
[ int1 -gt int2 ]
* Equal to
[ int1 -eq int2 ]
Also available are -le (less than or equal to); -ge (greater than or equal to);
-ne (not equal).
See above cited book p. 121 or man page on the test command for more about
arithmetic comparisons.
Bash also supports logical operators
* AND
condition1 && condition2
* OR
condition1 || condition2
If you wish to apply logical operators within a test expression,
use the -a and -o options.
* Example of AND
[ \( -f fil1 \) -a \( -f fil2 \) ]
is equivalent to
[ -f fil1 ] && [ -f fil2 ]
* Example of OR
[ \( -f fil1 \) -o \( -f fil2 \) ]
is equivalent to
[ -f fil1 ] || [ -f fil2 ]
----------------------
| Command line options |
----------------------
Both Bash scripts and Bash functions accept arguments or command line
options. Accessing these arguments from within a Bash script or function is
accomplished with special variables reserved for this purpose. These are:
* First command line argument
$1
* Second command line argument, and so forth...
$2, $3, $4, ...
* Total number of arguments passed to the script or function
$#
* All arguments separated by IFS (see above about IFS)
$*
Note, this generates a single string.
* All arguments ("$1" "$2" ... "$N") passed to the script
"$@"
Note, this generates N separate strings.
The difference between the last two is subtle, but has ramifications when
using them in a script.
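A small sketch makes the difference visible; imagine these two loops inside a
script invoked with the arguments "New York" Boston:
for arg in "$@"; do echo "$arg"; done    # prints "New York" then "Boston" (2 lines)
for arg in $*; do echo "$arg"; done      # prints "New", "York", "Boston" (3 lines)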
Example of testing the number of arguments passed, and acting upon the result:
if [ $# = 3 ] || [ $# = 2 ]; then
echo "Two or three arguments passed"
else
echo "Insufficient arguments passed"
fi
---------------------
| String manipulation |
---------------------
In general, Unix-like OSs come bundled with many string manipulation tools
such as sed, awk and more, that can be utilized in scripting.
However, Bash has its own built-in string manipulation and extraction feature
for strings stored in a variable.
For example to extract from a string stored in the variable $myvar a substring
that's length characters long, and starting at offset characters from the
beginning of the string, use the syntax ${varname:offset:length}
Example:
$ myvar="Hello World"
$ echo ${myvar:6:5}
Will display "World"
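Bash also has built-in prefix and suffix removal using the ${varname#pattern}
and ${varname%pattern} forms. For example
$ myfile="report.draft.txt"
$ echo ${myfile%.txt}       # strips the suffix, displaying "report.draft"
$ echo ${myfile#report.}    # strips the prefix, displaying "draft.txt"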
--------------------
| Integer Arithmetic |
--------------------
Bash supports arithmetic operations.
A variable can be declared as an integer using declare -i
The following code block declares myint as an integer, sets it to 5,
increments it by 3 and displays its content after the arithmetic operation.
declare -i myint=5
myint=$myint+3   # evaluated arithmetically because myint has the integer attribute
echo $myint      # displays 8
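Arithmetic can also be performed on ordinary (undeclared) variables using the
$(( )) arithmetic expansion syntax, for example
$ x=5
$ x=$((x+3))
$ echo $x
8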
--------------------
| Output redirection |
--------------------
For more about this topic refer to the Bash Programming Intro HOWTO.
There are three file descriptors:
* stdin -- standard input; identified by file descriptor 0
* stdout -- standard output; identified by file descriptor 1
* stderr -- standard error; identified by file descriptor 2
Here are some common invocations involving output redirection:
* Redirect command output to file
$ cmd > file
* Redirect stderr to a file
$ cmd 2> file
* Redirect stderr to stdout
$ cmd 2>&1
* Redirect both stdout and stderr to a file
$ cmd > file 2>&1
* Redirect echo or printf output to stderr
$ echo hello >&2
$ printf "hello\n" >&2
* An example of reading a file line by line
$ read < file # redirect contents of file to read command
* An example of looping through a file and printing its content line by line
while read f
do
echo $f
done < file
-----------------------------
| Reading from standard input |
-----------------------------
In the above example the read command was used to read lines from a file.
In fact, the read command can be used to read from standard input just as well.
Some examples follow.
* Example of reading standard input and placing in the variable $ans
$ read ans
In this case the read command will terminate when the user presses the newline
key. To have the read command terminate after the user types a single
character, use the -n option. For example
$ read -n 1
To read a string of up to five characters, and place the result in ans
$ read -n 5 ans
Note, pressing the newline key will terminate the command in any case.
To change the delimiter from newline to a different character use
the -d option. For example, to have space terminate the input issue
$ read -d " " ans
To place a time limit use the -t option. For example
$ read -t 3 ans
will time out after 3 seconds.
To present a text prompt use the -p option. For example
$ read -p "How many apples would you like to purchase? " ans
For more about the read option refer to the bash man page and scroll down to the
description about read
$ man bash
(Hint. To get there quickly, search for the pattern read \[ by typing /read \[
in the man page. The backslash is needed to escape the left bracket character;
it is not itself part of the text being searched for.)
-----------------------
| Creating simple menus |
-----------------------
Bash provides the select command that allows you to generate simple menus.
The following example illustrates its usage:
PS3='Which action would you like to perform? '
select action in "Add User" "Delete User" "Quit"
do
case $action in
"Add User" )
echo "Adding user"
# fill in commands to perform specified action
;;
"Delete User" )
echo "Deleting user"
# fill in commands to perform specified action
;;
"Quit" )
echo "Quitting"
break
;;
esac
done
In this example the user will continue to be prompted for an action until he
selects "Quit".
To have the select command terminate after only one iteration, place the "break"
instruction after "esac".
SED Essentials
********************************************************************************
* - SED Essentials -
********************************************************************************
sed is a powerful stream editor which can work in a pipeline.
The best way to illustrate sed is with some examples.
* Example 1. Substitute "World" for "Kitty"
$ echo "Hello Kitty" | sed "s/Kitty/World/"
will output "Hello World"
The "s/.../.../" is the substitute construct.
Between the slash characters ("/") you place the pattern to be replaced
(i.e. Kitty) followed by the replacement text (i.e. World).
* Example 2.
Suppose there is file called "guests.txt" containing the three lines
1) Betty
2) George
3) Joan
$ cat guests.txt | sed "s/)/]/"
will output
1] Betty
2] George
3] Joan
The substitute command takes the first close parenthesis on each line, ")",
and substitutes for it a close bracket character, "]".
* Example 3.
$ cat guests.txt | sed "s/[0-9])/*/"
will output
* Betty
* George
* Joan
In this example, for each line parsed, the first occurrence of a single digit
between 0 and 9 is marked for substitution with an asterisk.
The bracket notation [...] is the set notation. It tells sed to search for
any of the characters within brackets. For instance [abz] will mark any
occurrence of the letter "a", "b" or "z".
* Example 4.
$ cat guests.txt | sed -e "2s/$/ Jr./"
will output
1) Betty
2) George Jr.
3) Joan
The "2" in the expression is an "address". It tells sed to operate on line 2
only. An address can be a range of lines as well (e.g. "2,5s/.../...").
The "$" in the expression means the end of the line. Thus, the substitute
command places "Jr." at the end of the line.
* Example 5.
$ cat guests.txt | sed -e "2s/$/ Jr./" -e '2s/ / Mr. /' -e '2!s/ / Ms. /'
will output
1) Ms. Betty
2) Mr. George Jr.
3) Ms. Joan
In this example sed is given three expressions (each occurrence of the -e option
as an argument must be followed by a valid sed expression).
The above invocation is equivalent to invoking sed three times in sequence
(via a pipeline), as such
$ cat guests.txt | sed "2s/$/ Jr./" | sed "2s/ / Mr. /" | sed '2!s/ / Ms. /'
Each line of the file is processed by the first expression, and then by the
second expression, and then by the third.
The first two expressions operate on line 2 only. They prepend "Mr." to the
name, and add "Jr." to the end of the name.
The third expression operates on any line other than line 2. That's what the
"2!" means. Its purpose is to prepend "Ms." to the names on lines 1 and 3.
Also note that the last expression was quoted with single quotes. The reason
for this is that the Bash shell assigns a special (non-literal) meaning to the
"!" character, even when in double quotes. However, when surrounded by single
quotes the character takes on its literal meaning.
These examples illustrate some of the more basic capabilities of sed.
Sed, in fact, is capable of far more sophisticated string manipulations.
Read more about it using the info command
$ info sed
-------------
| SED Scripts |
-------------
Sed can also operate in script mode, which means it reads its string
manipulating expressions from a file. This may be necessary with more complex
text processing tasks. In that case SED is invoked with the -f option
to call on the script.
For example, the following is a script that concatenates all lines in a file:
#!/usr/bin/sed -nf
H # concatenates all the lines together in hold space
$x # places all those concatenated lines in the pattern space
$s/^\n// # removes the newline in the beginning of the line
# place here any other commands preceded by $ that will process that line
$p # prints pattern space
Save this file as "mysedscript"
Invoke sed:
$ cat file.txt | sed -f mysedscript
Awk Gawk
********************************************************************************
* - Awk/Gawk -
********************************************************************************
Awk is a text and number processing utility originally designed for Unix.
Awk implements a text processing language, and offers more sophisticated
text processing capabilities than Sed.
Gawk is the GNU version of Awk and offers some enhancement/extensions to awk.
I don't offer here an introduction to using Awk.
Refer to the info documentation on awk for a good introduction, as well as
comprehensive coverage of its features.
$ info gawk
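Still, to give a flavor of the language, here is a one-liner that prints each
username and its login shell from /etc/passwd (whose fields are separated by
colons):
$ awk -F: '{ print $1, $7 }' /etc/passwd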
----------
| Encoding |
----------
UTF8 characters, or in general, multibyte characters, don't process well
with regular expressions in awk (as well as sed).
For instance, the regular expression "." will match a one byte character only.
Therefore, if the text contains multi-byte characters, then multi-byte
characters may be turned into illegal characters through naive substitutions.
Prior to processing a file with awk make sure the file encoding is consistent
with the locale.
If not, you will need to pass on the LANG or LC_CTYPE to awk.
Alternatively, you can convert the file to utf8 encoding, and then process with
awk.
e.g. iconv -f UTF-16 -t UTF-8 file > fileconv
Some examples of proper handling of text containing utf8 characters.
(Sorry something got corrupted in the intended Hebrew strings in these examples,
and I no longer remember what they are supposed to be).
* Print all but the last character (note d7 is the first byte of a
non-vowelized Hebrew character)
echo "שלום" | gawk 'BEGIN { print "Here goes:" }
{ a = $0
sub(/(\xd7.)$/, "", a);
print a }
END { print "Did that work?" }'
* Place a double apostrophe (gershayim) before the last character in a Hebrew date
hebdatestr="תשעז"
echo $hebdatestr | gawk 'BEGIN { FS=""; mystr = "" }
{
for (i = 1; i <= NF-2; i = i + 2) {
mystr = mystr sprintf("%c%c",$i, $(i+1))
}
mystr = mystr "\"" sprintf("%c%c",$i, $(i+1))
}
END {print mystr}'
FS="" makes it so that the record gets split into one-byte fields
The loop iterates through all the two-byte characters but the last one.
Beware, this program will output corrupt characters if there are any one byte
characters in hebdatestr.
Help and Find Features
********************************************************************************
* - Help and Find Features -
********************************************************************************
This section highlights some of the help and find tools available for
Unix commands and files.
----------
| Man page |
----------
The man command (short for manual) is the classic method of obtaining help on
Unix commands, programming resources, and configuration files.
* To display all man entry summaries containing keyword
$ man -k keyword
* To format man page for "commandname" into postscript
$ man -t commandname
Within man you can use various commands to search and maneuver through the man
pages. Some commands to use within man:
* To move down a line
Press enter
* To move to next page
Press spacebar or PageDown key
* To move up a page
Ctrl-b
* Jump to first line
g
* Jump to last line
G
* To search forward for "pattern"
/pattern
* To search backward for "pattern"
?pattern
* Ignore case in searches that do not contain uppercase
-i
To reverse option, type -i again
* Ignore case in all searches
-I
To reverse option type -I again
* To get complete help screen
h
Some man pages can be thousands of lines long. Use good searching techniques
to find what you want. Some tips:
* To find a description of an option, search for the option by its switch.
For example:
/-i
* When searching for a word that may be contained in another word, apply
a space before and/or after. For example:
/ man[ ,.]
The search lands on the pattern "man" only when it contains a space before,
and a space or punctuation mark after. This will eliminate landing on words
such as manifold, manager, etc.
----------------------
| Other help utilities |
----------------------
The apropos command searches the man page index for entries that contain a
specified keyword in the entry name or short description, and offers a
brief description of the command or entry.
$ apropos keyword
The cheat command provides examples for using a command.
$ cheat command
To install cheat in Fedora:
$ dnf copr enable tkorbar/cheat
$ dnf install cheat
Note, copr is an unofficial Fedora repository.
--------
| Locate |
--------
The locate command can be used to search for any file, directory or executable
in the file system. It works by maintaining a database of all files
that are accessible to it. The system (via a cron job or systemd timer) runs the
updatedb command periodically to keep the database updated; files added since the
last update will not be found until updatedb runs again.
* Update a locate database
$ updatedb
* Locate "file" in directory structure
$ locate file
For more refer to this Wikipedia article,
as well as the man page:
$ man locate
$ man updatedb
A secure version of locate is slocate. Its invocation is as such
$ slocate pattern
Refer to its man page for more, or its on-line man page if not installed.
------
| find |
------
The find command can be used to perform a brute force search for a
desired file or files, recursively descending into the directory structure
from the specified starting path.
Note, the locate command (described above) works faster, but may not find
very recent additions.
To find a file:
$ find startpath -name filename -print
Example
$ find / -name matlab -print
If you are looking only for files, use
$ find -type f ...
------------------------------------
| Finding files containing a keyword |
------------------------------------
* To search a file for lines containing a keyword or expression
$ grep 'keyword' filename
* To search a file for lines not containing a keyword or expression
$ grep -v 'keyword' filename
* To search all files in a given directory for a keyword or expression
$ grep 'keyword' dirname/* /dev/null
Note, the /dev/null argument causes grep to print the file name
in which keyword is found.
* To search a directory and its subdirectories for a keyword or expression
$ grep -R 'keyword' topdirname /dev/null
* To search a directory and its subdirectories for a specific type of file
containing a keyword or expression
$ find topdirname -name '*.tex' -print | xargs grep 'keyword' /dev/null
Note, to ignore case in grep use "grep -i".
Text Utilities
********************************************************************************
* - Text Utilities -
********************************************************************************
Unix OSs usually come bundled with many handy text processing utilities. Some,
like sed, awk and perl, are very versatile, while others are more specific.
Here are some of the more specific utilities:
* Extract differences between two files
$ diff
* Vim's version of diff
$ vimdiff
* Compare two sorted files line by line
$ comm file1 file2
* Discard adjacent repeated lines
$ uniq [infile [outfile]]
* Output a multi-column file, treating file1 as column 1, file2 as
column 2, etc
$ paste file1 file2 ...
* Removes specified columns from text
$ colrm [firstcol [lastcol]]
* Convert tabs in a file to spaces
$ expand [file]
* Convert spaces in a file to tabs
$ unexpand [file]
* Removes fields from each line of file
$ cut [file]
* Converts a file having one encoding to an alternative encoding
$ iconv
* List hexadecimal values of data (with option for ascii)
$ hexdump file
* Strips file or program name and leaves directory path up to that point
$ dirname name
* Strips directory portion of path and leaves just program or file name
$ basename name
Also has option for stripping suffix.
* Reverses characters in every line
$ rev [file]
* join lines of two files on a common field
$ join file1 file2
For more details on usage and options refer to man page.
These utilities can be used together with output redirection to output into
a file, as well as with pipelining to accomplish more complex actions.
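For instance, a sketch of a pipeline that reports how many users are assigned
each login shell (field 7 of /etc/passwd):
$ cut -d: -f7 /etc/passwd | sort | uniq -c
Here cut extracts the shell field, sort groups identical lines together (since
uniq only collapses adjacent duplicates), and uniq -c prefixes each line with a
count.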
-------------
| Text pagers |
-------------
Text pagers are programs that facilitate perusing a text file.
* The simplest (and oldest)
$ more
* A successor to more. Supports backwards scrolling/searching and more.
$ less
* A more advanced pager, supporting additional paging features such as split
windows for multiple files
$ most
Some keyboard shortcuts
* Split window in two
ctrl-x,2 ctrl-w,2
* Open a new file in the window
:n
* Useful for tabulated data such as database (MySQL, etc.), CSV
$ pspg
* vim
It can be used as a pager for man pages by adding the following to .bashrc
export MANPAGER="/bin/sh -c \"col -b | vim -c 'set ft=man ts=8 nomod nolist nonu noma' -\""
Alternatively, use /usr/share/vim/vim81/macros/less.sh (replace vim81 by
current vim version)
See this Fedora Magazine article for more.
Compression/Tarring
********************************************************************************
* - Compression/Tarring -
********************************************************************************
Unix-like OSs usually come bundled with various compression utilities, and those
utilities that don't come bundled by default can usually be installed from a
repository.
* To compress/uncompress a *.zip file:
zip, unzip
Use -sf option to list files contained in zip file.
Use -r option to recursively descend into the directory hierarchy when archiving.
Use -P option to attach a password to the zip file.
Example:
$ zip -r archivename.zip archive
* To compress/uncompress a *.Z file:
compress, uncompress
* To compress/uncompress a *.z file:
pack, unpack
* To compress/uncompress a *.gz file:
gzip, gunzip
* To compress/uncompress a *.bz2 file:
bzip2, bunzip2
Hint: apropos can help you identify compression commands on your system
$ apropos compress
-----
| tar |
-----
tar is the Unix archiving command.
* To combine multiple files into one archive:
$ tar cvf filename.tar files
* To combine archiving with gzip comression
$ tar cvfz filename.tgz files
* To untar:
$ tar xvf filename.tar
* To untar and unzip
$ tar xvfz filename.tgz
or
$ tar xvfz filename.tar.gz
* To untar into a different directory
$ tar xvf filename.tar -C target_directory
(note, the -C option tells tar to change directory to target_directory; order matters)
* To list contents of tarred file
$ tar -t -f filename.tar
Can use --list instead of -t
-f option is for specifying tar archive
This also works for gzipped tar files (tgz extension)
See the tar man page for more usage examples and its myriad of options.
------
| Misc |
------
It used to be that storage devices were far more limited in space, and
network/internet connections were not as stable as they are today (e.g.
a connection would drop in the middle of a transmission.)
It was therefore useful to split a large file into chunks, and store
them on a few media; or transmit a few chunks of a file in separate
FTP sessions, and reconstruct the file on the other end. The following
commands accomplish this:
* To split a file in multiple components (example specifies 1 MB components):
$ split -b1m file_name [prefix]
* To combine into one file:
$ cat prefix* > file
Web Command Line Utilities
********************************************************************************
* - Web Command Line Utilities -
********************************************************************************
Some command line utilities are available for accessing websites and downloading
their content. These may be useful for automating the downloading of web
content, music files, general files, or scanning websites for desired
material.
* wget downloads the contents of a URL (can be a file or multiple files
from a website)
$ wget url
Note, if wget cannot verify the site's certificate then it aborts the download.
This can be corrected with the --no-check-certificate option.
* curl is a tool for transferring data to and from a server using
various protocols, such as SMB, FTP, SFTP and more, including HTTP and HTTPS
(i.e. Web content.) See the example at the end of this section.
* HTTrack is a website copier.
These utilities have many options. Refer to the respective man page for more.
See this website for other Web related command line utilities.
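As a quick illustration (the URL is just a placeholder), fetching a single file
over HTTPS and saving it under its remote name could be done with either tool:
$ wget https://example.com/archive.tar.gz
$ curl -O https://example.com/archive.tar.gz
With curl the -O option saves the data to a file named after the remote file,
rather than writing it to standard output.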
Miscellaneous Utilities
********************************************************************************
* - Miscellaneous Utilities -
********************************************************************************
Some useful utilities:
* Display a calendar (without options gives current month)
$ cal [month] [year]
Example:
$ cal 7 2020
outputs:
July 2020
Su Mo Tu We Th Fr Sa
1 2 3 4
5 6 7 8 9 10 11
12 13 14 15 16 17 18
19 20 21 22 23 24 25
26 27 28 29 30 31
* Generate a sequence of numbers. Three invocation possibilities:
$ seq last
$ seq first last
$ seq first increment last
Examples:
$ seq 5
outputs 1 2 3 4 5
$ seq 5 8
outputs 5 6 7 8
$ seq 1 2 10
outputs 1 3 5 7 9
$ seq -w 2 10
outputs 02 03 04 05 06 07 08 09 10
The -w option zero pads the numbers so all numbers in sequence are of equal
width.
-----------
| X Windows |
-----------
* An image viewer that will display just about any image format
$ xv filename.gif
* View an image on an X-display using the ImageMagick suite
$ display img
* Calendar with a graphical interface and appt book
$ ical
* X version of emacs
$ xemacs
* Graphing program
$ xmgr
* Desktop filemanager
$ /usr/openwin/bin/filemgr
* Set up color database for use with `ls' command
$ dircolors
* Attach a point-to-point network to the serial port
$ slattach
* Get info on window
$ xwininfo
-------------------
| Solaris utilities |
-------------------
* xplaygizmo (Solaris)
* play (Solaris)
* record (Solaris)
* soundtool (Solaris)
* imagetool
* Edit Icons
$ iconedit
* workman
Encodings and Unicode
********************************************************************************
* - Encodings and Unicode -
********************************************************************************
There are many character code mapping specifications.
To get a comprehensive list, issue the command
$ iconv -l
The following encodings are 1 byte encodings:
* ASCII-7
A 7-bit code covering values 0-127 (the most significant bit of the byte is unused.)
Includes control characters, punctuation, spacing characters and
the standard Roman alphabet characters [a-z,A-Z].
* ISO-8859-1
Also called "latin1." Besides the standard ASCII subset, it defines additional
characters in the 128-255 range.
* ISO-8859-8
Besides the standard ASCII subset, it defines Hebrew characters in the 128-255
range. e.g. Aleph=E0, Tav=FA
* cp-1255 or Windows-1255
Similar to ISO-8859-8, with the addition of Hebrew vowel points (niqqud).
ISO 10646 UCS and Unicode
With the internationalization of computing, a unified encoding standard arose that
captures multi-lingual character sets, mathematical and computing symbols,
as well as other specialized characters such as Emojis and more. This standard
is known as Unicode.
Encodings based on this standard are multi-byte encoding schemes.
The most common multibyte encodings are defined in various Unicode standards.
ISO 10646 UCS is the basis of these multi-byte multi-lingual coding schemes.
Amongst them are:
* UCS-4
This is a fixed four byte character code scheme.
The four bytes define a group, plane, row and cell.
It is rarely used in file encoding.
* UTF-16 (a successor to the fixed-width UCS-2, which was once standard in MS-Windows)
This is a 16 bit based scheme. Its basic form can represent 2^16=65,536
characters; surrogate pairs extend it to the rest of the Unicode range.
* UTF-8
This is by far the most common and recommended multibyte encoding.
It is a variable multibyte encoding that is capable of encoding all of the
over one million characters permitted by the UCS standard. It is backwards
compatible with ASCII (that is, the ASCII character set is a subset of the
UTF-8 character set). Almost all Web content uses this encoding.
* All 7-bit ASCII characters are encoded as a single byte character,
with the high bit being 0.
"0vvvvvvv"
* All 2 byte characters are composed of a lead byte which has high bits 110
and the filler byte having high bits 10:
"110vvvvv 10vvvvvv"
* All 3 byte characters are composed of a lead byte with high bits 1110
and two filler bytes having high bits 10:
"1110vvvv 10vvvvvv 10vvvvvv"
* All 4 byte characters are composed of a lead byte with high bits 11110
and three filler bytes having high bits 10:
"11110vvv 10vvvvvv 10vvvvvv 10vvvvvv"
Note, the lead bytes are uniquely identified by their high bits, and
cannot be confused with a filler byte.
In the Unicode standard, Hebrew characters have code points that fit in two bytes:
* Aleph = 05D0 = 0000-0101-1101-0000
The resulting UTF-8 encoding is
* Aleph = D790 = 1101-0111-1001-0000
- The lead byte 1101-0111 (D7) has high bits 110 in line with the rule for
a two byte encoding, with the remainder bits in the lead byte, 1-0111,
indicating we are encoding a Hebrew character.
- The filler byte 1001-0000 (90) contains the code for the Aleph 1-0000
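On a system with a UTF-8 locale this can be verified with hexdump (mentioned in
the Text Utilities section); the exact output layout may vary slightly:
$ echo -n "א" | hexdump -C
00000000  d7 90                                             |..|
00000002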
------------------------
| Other encoding schemes |
------------------------
JAVA
To express a non-ASCII character in a JAVA program, use "\u" followed by the
character code, as such:
\uhhhh
Where hhhh is the character code. The preceding "u" indicates Unicode.
Note, to specify a character based on its octal code, use
\nnn
where nnn is the octal code of the character.
HTML
To express a non-ASCII character or Glyph in HTML, use "&#" followed by the
character code and terminated with a semi-colon.
&#xxxxx;
where xxxxx is a decimal number corresponding to the character code.
Alternatively, use
&#xhhhh;
where hhhh is a hexadecimal number corresponding to the character code.
The "x" following "&#" indicates that a hexadecimal character code follows.
Some glyphs can be accessed using their name. For example
&cent;
= ¢
&amp;
= &
------------------
| Conversion Table |
------------------
A useful hex-binary conversion table:
0 = 0000
1 = 0001
2 = 0010
3 = 0011
4 = 0100
5 = 0101
6 = 0110
7 = 0111
8 = 1000
9 = 1001
a = 1010
b = 1011
c = 1100
d = 1101
e = 1110
f = 1111
Basic Administration
********************************************************************************
* - Basic Administration -
********************************************************************************
-------
| Intro |
-------
Besides the Linux kernel, the GNU/Linux operating system consists of many
systems and daemons that provide the user with abundant functionality and
automation. Examples of such systems are systemd (see below), udev (see below),
crond (see below), and others. In subsequent sections, these more advanced
topics will be covered.
This introductory section on computer administration describes some of the more
basic tasks that one may encounter in administering his own system, such as
managing users and groups, authentication, working with permissions and setting
the computer name. It also discusses the structure of the root directory and
various ways of obtaining information about the system.
Networking and network administration is purposely left out of this guide.
Only minimal reference to networking can be found here. See the
networking guide to learn more about networking, and how to administer a
network.
------------
| Super User |
------------
In Linux and Unix systems, a special user called root or super user has full
power to administer the system. Amongst other things, the super user can:
* Create and erase, read and modify files and directories anywhere in the system
(including files without which the system could not function).
* Start and stop system services.
* Add, remove and configure users and groups.
* Set and modify access permissions and policies.
* Reconfigure any aspect of the system allowed by the system.
The super user's powers are, however, limited to the system for which he is
administrator, precluding remote file systems which are governed by their own
administrator.
Two important commands to know if you are an administrator:
* su
This command can be used to log in to a shell as another user
$ su username
e.g.
$ su jdoe
To login as the super user, leave out the argument
$ su
or
$ su root
You will be prompted for the password of the respective user.
Su can also be used to execute a command as another user
$ su username -c "command"
e.g.
$ su jdoe -c "cat /home/jdoe/inventory.txt"
This can be useful if you don't have permissions to access jdoe's
"inventory.txt" file.
Executing the above command will launch the command as jdoe
Of course, you will be prompted for jdoe's password before the command is
allowed to execute, so you'll either need his password, or have him around
to type it in.
To run a command as super user:
$ su root -c "command"
* sudo
This command allows you to execute commands as super user. It differs from
su in that it doesn't prompt for the super user's password before
proceeding to execute the command. Rather, it consults a file called
/etc/sudoers which contains information on who can use sudo, for what commands,
and whether the user's password is required to execute the command.
Sudo, however, never asks for the super user's password.
An example of executing a command using sudo:
$ sudo mount /mnt
Suppose this was issued by user jdoe, and in the sudoers file is a line
jdoe ALL=NOPASSWD: /usr/bin/mount /mnt
The result of the command would be to execute the command mount /mnt
without prompting jdoe for a password.
If on the other hand, sudoers contains the line
jdoe ALL=PASSWD: /usr/bin/mount /mnt
jdoe will be prompted to enter his password before executing the command.
The file sudoers accepts wild cards in specifying commands.
In the example, for instance, jdoe can only execute mount /mnt.
If he attempted to specify a different mount point, for example
$ sudo mount /srv/samba/myshare
the sudo command will reject jdoe's request to execute the command.
If, however, in /etc/sudoers there was the line
jdoe ALL=NOPASSWD: /usr/bin/mount *
jdoe could execute the mount command with any argument.
Warning! Be careful with using wildcards in sudoers, as it may create a
security hole, or vulnerability you may not anticipate.
If you know how to use vi/vim then the command visudo can be used
to edit the sudoers file, and will also check it for correct syntax.
Otherwise edit the /etc/sudoers with an editor of your choice.
The latest Fedora Linux installers (and possibly others) do not create a user
during the main phase of installation, nor do they ask to set a root password.
Rather, after the first reboot the system prompts for the creation of a user.
This user is given administrative privileges, which basically means the user
is added to the group wheel, and can execute any privileged command
via sudo (including creating/changing the root user password.)
The relevant lines for this in /etc/sudoers are:
%wheel ALL=(ALL) ALL
# %wheel ALL=(ALL) NOPASSWD: ALL
By default the user must enter his password prior to executing a command via
sudo. To specify that no password is required, comment out the first line,
and uncomment the second. However, for security reasons this is not
recommended. I personally only remove the password requirement when I have a
demanding administrative task to perform, after which I reinstate the password
requirement.
------------------
| Users and Groups |
------------------
When a Linux installer runs through the installation process, at some point
it prompts for a regular (non-root) user to be created. This user is usually
configured to have super user (administrator) privileges by way of the
sudo command.
This arrangement, however, may be insufficient in some circumstances. For
instance a few people sharing one computer will probably require that each
person has their own user and home directory.
Furthermore, there is the concept of a group, whereby users that are assigned
to a particular group are granted access to resources associated with that
group. Any user may be assigned to more than one group. For example someone
belonging to the wheel group has administrative privileges. Someone
belonging to the dialout group has access rights to modem devices.
Some groups, like the ones above, are automatically created by the system during
installation. Others can be created by the administrator on a need basis.
This may be useful if, for example, certain files, say fil1 and fil2,
owned by user jdoe should be made readable to select users other than
himself.
In that case one would do as follows:
* Create a group, say thetopgroup, and join relevant users to this group.
* Assign fil1 and fil2 to thetopgroup.
* Set the permissions of these files, such that members of thetopgroup
have read access to these files.
Note, Permissions are discussed in a different subsection.
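In terms of actual commands (using thetopgroup and a hypothetical user asmith
for illustration; chgrp and chmod are covered under Permissions), the steps above
might look like this when run as the super user:
$ groupadd thetopgroup
$ usermod -a -G thetopgroup asmith    # repeat for each relevant user
$ chgrp thetopgroup fil1 fil2
$ chmod g+r fil1 fil2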
Linux/Unix has command line tools for administrating users and groups.
Graphical frontends are also available.
Following are examples of some common administrative tasks related to users.
* Adding a user:
$ useradd jdoe
On a given system, users are identified with a numerical value called a UID.
For ordinary users the UID typically starts at 1000, and the useradd command
assigns the new user the next available UID.
To create user jdoe with UID 1005
$ useradd -u 1005 jdoe
See man page for more options.
* Modifying a user's account parameters:
$ usermod [options] username
See man page for all options.
* Deleting a user:
$ userdel username
Following are examples of some common administrative tasks related to groups.
* Add user to supplementary group
$ usermod -a -G groupname user
Note, -a = append; without this option the user is removed from any
supplementary group not listed after -G.
Example: Make group "Y" the primary group of user "y"
$ usermod -g Y y
Normally, when a user is added to a group, the user has to log out of his
session for the change to take effect. Alternatively, the user can open a new
login shell
$ su - user
and in that shell the user will be included in the added group.
* To give a user administrative privileges (especially via sudo)
add him to group "wheel".
* Creating a new group
$ groupadd groupname
e.g.
$ groupadd sales
On a given system, groups are identified with a numerical value called a GID.
To create group sales with GID 1010
$ groupadd -g 1010 sales
See man page for more options.
* Modifying a group's parameters:
$ groupmod [options] groupname
To add a user to a group
$ gpasswd -a user grouptoadd
Read more in the man page.
* Deleting a group:
$ groupdel groupname
To remove a user from a group
$ gpasswd -d user grouptodel
The file /etc/passwd contains a list of all the users on the system.
Each line corresponds to a user and includes the following information about
the user:
* username
* UID (User ID)
* GID (Group ID) - principal group to which user belongs.
* home directory
* shell (e.g. /bin/bash)
(/sbin/nologin indicates the user is not permitted to log in interactively
- useful for non-login type accounts)
The fields in each entry are separated by colons.
The file is plaintext and world readable.
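For illustration, a typical (hypothetical) entry looks like
jdoe:x:1000:1000:John Doe:/home/jdoe:/bin/bash
where the "x" in the second field indicates that the actual password is kept
in /etc/shadow.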
The file /etc/group contains a list of all the groups in the system,
their GID, and who is a member of each group.
The file /etc/login.defs contains the following user account system
settings:
* Where mail boxes reside
* Maximum number of days a password may be used
* Max/min value of UID/GID
* Whether or not to automatically create a user's home directory when creating
a new user.
* and more
Further reading:
* man page for various commands introduced here
* Understanding the passwd file
--------------------------
| Authentication/Passwords |
--------------------------
This subsection is about authentication on Linux.
Linux largely uses a system of libraries known as Linux-PAM to handle various
aspects of authentication.
User information is stored in the /etc/passwd file (see above), although
this file has nothing to do with actual passwords.
The file /etc/shadow contains user passwords in encrypted form, and
other information about the password, such as the last time a password was
changed and expiry information. Note, even if a malicious user was able to
view this file, he could not simply read passwords from it, as they are stored
in hashed (one-way encrypted) form; weak passwords, however, may still be
vulnerable to offline cracking.
To change a password it is recommended to use
$ passwd [user]
e.g.
$ passwd jdoe
Use without arguments to change your own password
$ passwd
When changing the password, the typed password will not be echoed on the
terminal, making it resilient to eavesdropping (although someone could still
potentially snatch your password by observing or recording your keystrokes.)
Note, to change another user's password you must do so as super user.
Alternatively, a batch style command is available for changing one or more
passwords.
$ chpasswd
After typing the command and pressing enter, you start typing in one or more
lines, each containing a username and password separated by a colon (without
white space). For example
jdoe:ThatsMyPassword
mike:andThatsMine
Since this command accepts passwords from standard input, you can use the
shell's redirection feature to pass a file of passwords to the command
$ chpasswd < passwordsfile
The file must adhere to the format described above.
Beware, this password setting utility requires entering passwords in plaintext,
making your system vulnerable to eavesdropping.
Note, the yppasswd command was once the way to change a password on a NIS
system (see here for more about NIS). This command is now deprecated
and the passwd command should be used instead.
Further reading:
* man page for passwd and chpasswd, pam, shadow
* Understanding the shadow file
--------------------------------
| Locking and disabling accounts |
--------------------------------
* To lock a password
$ usermod --lock username
* To disable an account
$ usermod --expiredate 1970-01-01 username
Alternatively
$ usermod --lock --shell /sbin/nologin username
Alternatively
$ passwd -l username
* To unlock an account
$ passwd -u username
------------------------
| Permissions/Privileges |
------------------------
Unix, being a multiuser system, needs to secure one user's files from
unauthorized access by other users on the same system. POSIX permissions is a
standard that all POSIX compliant OSs (i.e. Unix like OSs) support with regard
to accessing files and directories, both locally and over a network.
The main command for administering permissions is chmod, which is short for
"change mode".
To change permissions for a given file, issue
$ chmod mode filename
mode is some arrangement of the characters ugo+-=srwx, whereby
* The possible permission scopes are
u = user, g = group, o = anyone else
* The permission flags are
s = setuid/setgid, r = read, w = write, x = execute
(see manpages for additional flags)
For example, the invocation
$ chmod u=rw,g=r,o= fil.txt
causes the permissions on file fil.txt to be set so that it would be
* readable and writable by its owner
* only readable by members of its group
* not accessible at all by others
Alternatively, a shorthand 3 digit octal notation is available for setting
permissions
$ chmod 640 fil.txt
The command interprets the three digits as the permissions for the user
(owner), group, and others, respectively. Each digit is the sum of
read (4), write (2) and execute (1):
* 6 = rw- ; the owner may read and write
* 4 = r-- ; members of the group may only read
* 0 = --- ; others have no access at all
To observe the newly assigned permissions, type:
$ ls -l fil.txt
-rw-r-----. 1 jdoe sales 57 Jul 1 16:35 fil.txt
The interpretation of the first field of the output
* rw- are the permissions for the owner
* r-- are the permissions for those in the "sales" group
* --- are the permissions for others
To make a binary or script executable, set the executable flag (x) in the
permissions of the file.
e.g.
$ chmod a+x myscript
will set the executable flag for the owner, group and others.
$ ls -l myscript
Say, the old permissions were
-rw-r--r--
Then the new permissions will be
-rwxr-xr-x
For a directory, the execute flag must be set in order to change into it or
access files within it (the read flag controls listing its contents).
e.g.
$ chmod a+x mydir
The new permissions will have all the execute flags set.
drwxr-xr-x
The following example illustrates the set-user-ID (setuid) permission flag.
To mark an executable so that it runs with its owner's privileges:
$ chmod u+s filename
This means that when someone with permission to run this program does so, the
program runs with the privileges of the file's owner rather than those of the
invoking user. For example, the program may access a sound card device even
though the invoking user himself has no access rights to that device. Note,
if not used judiciously this could present a security hole.
More advanced access control can be achieved with extensions to POSIX
permissions, such as offered by SELinux and similar software (see section on
SELinux for more.)
---------------
| Computer Name |
---------------
The computer's name is a settable parameter. It usually carries more
significance when the computer is part of a network, especially when functioning
in the capacity of a server.
Note, the computer's name is not tied to the physical machine.
In a dual boot configuration for instance, whereby you have two Linux
installations (or a Linux and Windows installation), each system you bring up
may have been configured with a different computer name.
Some computer name related commands:
* Show or set the system's host name
$ hostname
To temporarily set the hostname (till the next boot)
$ hostname thename
To make the change permanent, edit the file /etc/hostname
and change its content to the new hostname.
See also hostnamectl below.
* Show or set the system's NIS/YP domain name
$ domainname
* Show the system's DNS domain name
$ dnsdomainname
* Show or set system's NIS/YP domain name
$ nisdomainname
* Show or set the system's NIS/YP domain name
$ ypdomainname
There are actually three types of hostnames:
* A high level descriptive ("pretty") name, such as "Mike's Laboratory".
* A static hostname used to initialize the kernel at boot. If not set,
this defaults to "localhost". This is the hostname stored in /etc/hostname
and displayed by the hostname command.
* A transient hostname provided by the network (e.g. via DHCP) in case the
static hostname is the trivial "localhost".
The following command is used to manage hostnames on the system:
$ hostnamectl set-hostname desired_name
See man page for hostnamectl for more about the different kinds of hostnames
and how to use this command.
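For example, to set the descriptive ("pretty") name and the static hostname
separately (the names here are hypothetical):
$ hostnamectl set-hostname "Mike's Laboratory" --pretty
$ hostnamectl set-hostname mikeslab --static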
Some relevant files:
/etc/hostname
/etc/hosts
/etc/sysconfig/network (contains networking flag, and hostname)
/etc/resolv.conf
-----------
| Processes |
-----------
Unix is a multitasking operating system. At any given moment many processes
are running concurrently. Some background material and commands used to
manipulate and display running processes are provided in section Basic Unix.
To list all files being used by processes use the command lsof.
Some examples:
* To list all processes accessing file doc.pdf
$ lsof doc.pdf
The output is
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
xpdf 12467 jdoe 21r REG 8,5 149831 1303752 doc.pdf
Here the process ID is 12467 and corresponds to an invocation of the xpdf
command on the file doc.pdf.
* To list all processes accessing the file .fil1.txt.swp (a swap file being
used by an active vim session)
$ lsof /home/jdoe/.fil1.txt.swp
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
vim     10320 jdoe  4u   REG  8,5      11188 1370728 /home/jdoe/.fil1.txt.swp
* If your home partition is on /dev/sda5 then to list all processes accessing
the device
$ lsof /dev/sda5
The header and a selected line from the output
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 20231 jdoe cwd DIR 8,5 4096 1563092 /home/jdoe/tmp
The full listing will give you all the processes that are accessing files and
directories on your home directory. This command may be used by an
administrator to identify potential malicious activity by identifying
processes that should not be accessing jdoe's home directory.
* To list all processes using the soundcard (dsp)
$ lsof /dev/dsp
Note, modern Linux installations place all sound card device nodes in
/dev/snd rather than utilizing the single device /dev/dsp.
* To list all processes
$ lsof | less
Since in this case lsof will list numerous processes, pipelining into a text
pager is a good idea.
lsof has many filtering options. See man page for more.
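For instance, to list open files belonging to a given user, or processes with
an open TCP connection on port 22:
$ lsof -u jdoe
$ lsof -i TCP:22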
--------------------
| The root directory |
--------------------
Some important directories in the root directory:
* /bin
Contains important binaries, such as shell programs (e.g. bash), standard Unix
utilities (e.g. grep, sed), editors (e.g. vi, vim, emacs), applications (e.g.
libreoffice, xpdf, evince), and much more. A typical installation may
contain something like 3000 binary executables in this directory.
* /boot
This directory contains bootloader binaries and files (e.g. grub modules,
and configuration files), Linux Kernel(s) and initram image(s).
* /dev
This directory contains files through which all devices in Linux are
accessed and/or controlled. Both block devices (e.g. storage), and character
devices (e.g. mice, keyboard) are accessed through files. There is a saying
that in Unix everything is a file. Although not quite so (e.g. you will not
find a device file by which you access your ethernet card), it is not far
from the truth. For example, your audio device may be represented by the
file /dev/dsp, in which case the command
$ cat mysong.raw > /dev/dsp
will play the contents of "mysong.raw".
It is important to note, however, that the device file is merely an interface
to the device, and that there is a real device driver (whether a module, or
a driver built into the kernel) that makes this file useful. A device file by
itself without a driver backing it, can't interact with the device.
For more about the /dev/ directory refer to this page.
* /etc
This directory contains mostly configuration files which affect all aspects
of system configuration and behavior. See subsection below for more about it,
including a short description of selected configuration files.
* /home
Contains users' home directories.
* /lib
Contains essential shared libraries and kernel modules.
* /sbin
Contains system binaries (e.g. lspci, visudo, useradd). This directory has
many system administration utilities.
* /usr
This directory mimics somewhat the root directory in that it contains its
own bin, sbin and lib directories. Note, on some systems /bin is actually a
symbolic link to /usr/bin. Similarly for /sbin and /lib. The name usr
stems from its original purpose, which is to house user related directories
and files.
For instance, /usr contains the share directory, which contains fonts,
templates, pixmaps, system sounds, and many other resources used by the
system and installed applications.
* /var
This directory contains subdirectories and files that store system data which
is variable. For instance /var/spool/mail is a directory containing
user email inboxes. It also contains a print spooler, cache files, lock
files, and more. It is not a place for configuration files.
It is often placed on a separate partition since it is not uncommon for it
to expand with time. If placed on the same partition as the rest of the
directories on root, it may expand to the point where there is no room left
on /, which will prevent the system from functioning properly until some space
is cleared out.
For more, see here.
* /proc
This is a virtual filesystem. That is, none of the files in this
directory are to be found on a storage device. The files in /proc are all
created by the kernel during runtime and kept in RAM. Various system
parameters are kept in /proc files.
Some administrative commands simply read a file in /proc and present
information based on the contents of the file.
Some commands nicely format the contents of a file in /proc (e.g. lsmod
yields similar output as cat /proc/modules.)
The file /proc/uptime contains the amount of the time the system has been up.
$ cat /proc/uptime
The command uptime takes the information in /proc/uptime and presents
it in human readable form
$ uptime
For more about /proc, read here.
* /sys
This directory is similar to /proc. The kernel exports hardware related info
into files in this directory, which are used by udev to manage devices.
As with /proc, it is a virtual file system. Also, refer to subsection sysfs.
For more refer to this webpage.
For examples on accessing hardware info, and toggling parameters refer to this
webpage.
* /root
This is the administrator's (root) home directory. For functionality
and security purposes it sits on / rather than with other user home
directories in /home.
For more about the root directory (/) refer to this webpage.
-------------------
| The etc directory |
-------------------
The etc directory contains many configuration files. Many of these files come
pre-installed with the system, or are generated automatically during
installation. Others may be installed when an application or binary package
is installed.
It is not unusual that on occasion you may want or need to tweak various
settings in these configuration files.
Some of the files and directories you will encounter in /etc are:
* File system related
fstab - This file contains descriptive information about the various
file systems associated with the given OS installation.
This is a very important file, as the kernel references it to determine
which files systems to mount during boot and runtime, how to mount them,
and various mounting parameters.
It is mainly used for mounting static file systems, such as /boot, /, /home,
NFS and Samba shares. During installation fstab will be automatically
generated by the installer. Subsequently, the system administrator may
modify it as the layout of the system changes.
For more about it see man page.
$ man fstab
mtab - A list of currently mounted file systems. There is no point in
editing this file, as it is dynamically generated and modified.
* Shell related
profile - System wide functions and aliases for bash (environment
stuff)
bashrc - System wide functions and aliases for bash.
csh.login - System wide csh login script (sets up $PATH variable and more)
csh.cshrc - System wide cshrc script
profile.d - Contains various "sh" and "csh" scripts for various programs
(e.g. kde, vim and more)
* User related files
passwd - Contains a list of all users on a system and information about
them (e.g. username, UID, principal group, default shell)
group - Contains a list of groups. For each group it contains the
group's GID, and its members.
sudoers - A file for configuring users to be able to use the sudo
command, which allows ordinary users to run commands as administrator.
* Network related (see Network Guide for more)
resolv.conf - A file used by networking software to locate DNS
Server(s). See man page for more.
hosts - A list of static hosts (name and IP) on the local network.
If a local DNS server is running on the network, then it is better to have
the name to IP mapping be done by it rather than by /etc/hosts. The reason
for this is that /etc/hosts has to be maintained on each individual
workstation, whereas the DNS table need only be updated on the DNS server.
smb.conf - Configuration file for the Samba server. If you add SMB
shares in your LAN, you will need to add entries for them in this file.
* Miscellaneous
ppp - A point-to-point protocol configuration directory
mailcap - A system wide mailcap file (tells mail clients and other programs
what application to launch for various MIME types.)
Most configuration files have a man page. Refer to them for more.
Also for more about the /etc directory and a more comprehensive list of
configuration files, refer to this webpage.
-------
| sysfs |
-------
(Since Kernel 2.6)
The Linux kernel exports detailed information about the machine's physical
hardware, as well as attached hardware into files in the directory /sys.
The information in that directory can be used by system services, as
well as user space applications, to learn and act upon connected devices.
/sys/class/net contains a list of network devices as created by udev.
These are symbolic links to the kernel's version of the file, located in
/sys/devices.
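For example, to read the hardware (MAC) address of a network interface
(assuming an interface named eth0 exists on the system):
$ cat /sys/class/net/eth0/address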
USB devices and more can be found in subdirectories of /sys/devices/pci0000:00
See also udevadm utility described below (in subsection udev).
-------------
| System Info |
-------------
The following is a list of useful commands for monitoring the system and
displaying system info.
* To display a dynamic real-time view of a running system
$ top
Note, this is one of the more used commands for monitoring system activity.
Some variants of top are htop, glances, conky, nmon, atop, gtop, Linux process
viewer. See this webpage for more.
* To print all known system information (e.g. processor type, kernel name/version,
etc.)
$ uname -a
* To determine a Debian-based distribution's name, look in package manager
directories.
e.g. /etc/apt/sources.list
* To retrieve kernel messages from the file /proc/kmsg (do not access this file
directly), issue the command
$ dmesg
All kernel messages since bootup will appear. It's a good idea to use a text
pager (e.g. more, less) to page through the output, or pipeline into a text
extraction utility (e.g. grep) to focus on specific messages.
In the man page the short description of this command is "print or control the
kernel ring buffer" (the ring buffer is the kernel's circular, in-memory log of
kernel messages.)
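For example, to look only at USB related kernel messages:
$ dmesg | grep -i usb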
* To display information about system memory
$ free
By default the output is displayed in kilobytes. See man page for other
possible display units.
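For example, to display the figures in human readable units (MiB, GiB):
$ free -h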
* To display information about processes, memory, paging, block IO, traps,
and cpu activity
$ vmstat
* To display amount of time CPU is up and other info
$ uptime
* To display pci cards
$ lspci
Some useful options:
* -v for verbose
* -vv for very verbose
* -k to show driver in use
* As mentioned above, the /proc partition contains much useful system info
For more about it see man page on proc.
To get information about partitions
$ cat /proc/partitions
-----------------------------------
| Configuration of various services |
-----------------------------------
A set of graphical configuration utilities are available for various Linux
services, and are named system-config-x, where you would substitute for
x the service you wish to configure. For instance system-config-printer,
system-config-firewall, system-config-users, system-config-keyboard,
system-config-language.
These configuration utilities do not usually come preinstalled, but should be
available in your distribution's repository.
A text based configuration utility setup can also be used to configure
various services.
------------
| dd utility |
------------
Warning! dd must be used with utmost caution. A mistake in one of
the arguments can quietly and irreversibly wipe out portions of the disk you
did not intend to, or the disk in its entirety.
The purpose of this utility according to the man page is:
"convert and copy a file".
Practically speaking, this utility is used for copying raw images onto storage
devices, or copying from a storage device to a file.
For example to copy the raw data on disk partition /dev/sde2 into a file named
raw.img, invoke:
$ dd if=/dev/sde2 of=raw.img bs=4096 status=progress
Some useful options:
* if=filein
Read from filein instead of stdin (standard input)
* of=fileout
Output to fileout instead of stdout (standard output)
* ibs=NUMBYTES
Read up to NUMBYTES at a time. The default is 512 bytes (size of a sector).
* obs=NUMBYTES
Write up to NUMBYTES at a time. The default is 512 bytes.
* bs=NUMBYTES
Read and write up to NUMBYTES at a time (overrides ibs and obs). The
default is 512 bytes. e.g. bs=1M (sets this parameter to 1 MiB; MiB = 1024x1024 bytes)
See below for a list of permitted suffixes.
* conv=fsync
Write all data to destination before completing command.
This has relevance when writing to a storage device to which the data is
buffered in intermediate memory before being written onto the storage
medium. This option specifies that the command should complete only after
everything has been written to the disk.
* skip=N
Skip the first N blocks (N*ibs bytes) when reading the input.
* seek=N
Skip the first N blocks (N*obs bytes) in the output.
* count=N
Copy only N blocks
A partial list of suffixes for bytes or blocks:
b =512, kB =1000, K =1024, MB =1000*1000, M=1024*1024, GB=1000*1000*1000,
G =1024*1024*1024
See man page for more.
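Another common use, a sketch of backing up the first sector of a disk, which
contains the MBR (assuming the disk in question is /dev/sda):
$ dd if=/dev/sda of=mbr.img bs=512 count=1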
Time Keeping
********************************************************************************
* - Time Keeping -
********************************************************************************
----------------
| Hardware Clock |
----------------
Linux has a system clock which is independent of the built-in (CMOS)
hardware clock or real time clock (RTC). At boot the system clock is
initialized from the hardware clock. Use the hwclock command to query or set
the hardware clock.
For example, to match the hardware clock to the system time, issue (as sudo)
$ hwclock --systohc
Alternatively
$ hwclock -w
Note, this is easier than using the BIOS to adjust the hardware clock.
Note, if you want your hardware clock to be set to UTC (i.e. basically Greenwich
Mean Time, GMT), then add the option --utc.
$ hwclock -w --utc
If you want it set to local time then add the option --localtime.
Note, the hardware clocks on motherboards are not known for their accuracy.
Therefore, it is a good idea to maintain an accurate system clock (e.g. by
synchronizing with a time server - see NTP server below), and occasionally
correcting the hardware clock by matching it to the system clock (as was
illustrated in the example above.) This can be automated with a cron job.
See section on Cron for more.
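A sketch of such a cron entry, run weekly on Sunday at 03:00 (the path to
hwclock may differ on your system):
0 3 * * 0 /usr/sbin/hwclock --systohc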
See man page on hwclock for more details about usage and options.
Also see man page on rtc for more about real time (hardware) clocks.
Refer to this Archwiki article for more about time keeping.
-----------
| Time zone |
-----------
Time zone info is contained in /etc/localtime which is just a soft-link
to a file in the directory /usr/share/zoneinfo, which contains all the
different time zone files.
Use timedatectl to control time zone settings
$ timedatectl list-timezones
$ timedatectl set-timezone Zone/SubZone (e.g. Asia/Jerusalem)
Other relevant utilities for managing time and time zones:
date, tzset, settimeofday, adjtimex
Relevant files:
/etc/adjtime, /usr/share/zoneinfo/, /etc/localtime
For string time formats used in programs such as date, issue
$ man strftime
------------
| NTP Server |
------------
If you want your computer to synchronize its date/time with an NTP server,
you must have an NTP daemon (ntpd) running (usually specified as a service).
Note, the port used by NTP servers must be accessible (this could be a problem
with facilities that highly restrict port accessibility.)
To query an NTP server, use
$ ntpdate -q pool.ntp.org # (query only; the ntpdate utility is outdated)
or, to set the system clock once and exit,
$ ntpd -q
Normally ntpd is run as a systemd service, and the user need not be concerned
about manually updating the system date and time. In Fedora the service that
handles time synchronization is systemd-timesyncd.service.
To see if it is enabled type at the command prompt
$ systemctl is-enabled systemd-timesyncd.service
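Note, timedatectl can also be used to turn NTP based synchronization on or off:
$ timedatectl set-ntp true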
For further reading refer to Configuring NTP Using ntpd.
Language and Locale
********************************************************************************
* - Language and Locale -
********************************************************************************
---------------------
| Language and Locale |
---------------------
A locale is a set of parameters that is used to convey to the Operating System
and applications region specific preferences, such as language, currency,
date format, etc.
On POSIX platforms (OSs conforming to Unix like standards) these parameters
are specified by the following set of locale variables:
LANG, LC_CTYPE, LC_NUMERIC, LC_TIME, LC_COLLATE, LC_MONETARY, LC_MESSAGES,
LC_PAPER, LC_NAME, LC_ADDRESS, LC_TELEPHONE, LC_MEASUREMENT, LC_IDENTIFICATION,
LC_ALL
To list the locale variables that were set for one's system, issue the command
$ locale
Sample output:
LANG=en_US
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC="en_US.UTF-8"
LC_TIME="en_US.UTF-8"
LC_COLLATE="en_US.UTF-8"
LC_MONETARY="en_US.UTF-8"
LC_MESSAGES="en_US.UTF-8"
LC_PAPER="en_US.UTF-8"
LC_NAME="en_US.UTF-8"
LC_ADDRESS="en_US.UTF-8"
LC_TELEPHONE="en_US.UTF-8"
LC_MEASUREMENT="en_US.UTF-8"
LC_IDENTIFICATION="en_US.UTF-8"
LC_ALL=en_US.UTF-8
In the above example, the locale being employed is en_US.utf8, which specifies
the use of the English language in the US region. A further specification of
this locale is to employ the UTF-8 character set.
Typical locales in use by myself:
C, en_US, en_US.utf8, he_IL.utf8
Note, the C locale, unlike other locales, was designed for computers as
opposed to people (see here.)
The LANG environment variable specifies the locale language (this is the
main variable in the locale, as it establishes defaults for all other
variables.)
e.g. LANG=en_US.utf8 or LANG=C or LANG=he_IL.utf8
Setting your computer's locale may affect how your desktop and applications
behave. For instance, the desktop or an application may use the locale settings
to display menus and help features in the language of the locale. A spreadsheet
program may probe the locale variable LC_MONETARY to set the default currency
(e.g. Dollar, Yen, Franc, etc.)
Applications may make use of some locale variables and not others.
For instance LC_COLLATE may be used by some applications and utilities (e.g.
sort) to determine lexical and numerical sorting order.
If LC_COLLATE=C then the sorting order would be that which is specified by
the C locale.
-------------------------------
| Overriding the default locale |
-------------------------------
If you wish to launch an application or invoke a command, applying to them a
locale different than the active or default locale, then precede the command
with the desired locale variable settings. For example
$ LC_MONETARY=fr_FR.UTF-8 libreoffice
will launch libreoffice telling it you would like to use French currency by
default. Note, language and other LC_* variables are unaffected, and
libreoffice will use the default locale for those other variables.
Another example:
$ LC_ALL=C LANG=fr_FR.UTF-8 libreoffice
will launch libreoffice with all locale categories forced to "C"; since LC_ALL
takes precedence over both LANG and the individual LC_* variables, the LANG
setting has no effect here.
LC_ALL is a special locale variable, used to override the other LC_* variable
settings. For instance if you set LC_ALL=en_GB.UTF-8, then an application
will ignore the values of the other LC_* settings, and instead consider them
to have been assigned a value of en_GB.UTF-8. It is not considered good
practice to employ this variable as a way to change your system's locale on
a permanent basis. It should rather be used as a means to temporarily
override the default locale when executing a particular command, script or
application.
-----------------------------------
| Locale related commands and files |
-----------------------------------
Some useful commands:
* Display all locale variables for current locale
$ locale
* Display all locales available on the OS
$ locale -a
* Generate a specific locale
$ locale-gen [localename]
e.g.
$ locale-gen he_IL.UTF-8
Refer to the on-line man page for more.
* To control the system locale and keyboard settings, use
$ localectl [options] command ...
See man page for more.
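For example, to set the system wide default language:
$ localectl set-locale LANG=en_US.UTF-8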
Relevant files:
* A global locale configuration file
/etc/locale.conf
* A user specific locale configuration file
~/.config/locale.conf
* Locale setting for virtual consoles
/etc/vconsole.conf
For more about locale refer to the following Wikipedia article.
For more technical details on setting and manipulating locales refer this
Archwiki page.
File System Admin
********************************************************************************
* - File System Admin -
********************************************************************************
-----------------------
| Mounting File Systems |
-----------------------
To mount a filesystem
$ mount -t filesystem device directory
Examples:
$ mount -t msdos /dev/hda1 /mnt/dos
$ mount -t vfat /dev/sda1 /mnt
Note, the mount point directory and the device have to exist. Therefore in the
first example you'll need to create the dos directory prior to mounting.
$ mkdir /mnt/dos
For available filesystems see manpages on fstab or mount.
To automatically mount a filesystem during the boot process include an entry in
the /etc/fstab file (described below in subsection fstab).
Note, after making changes to /etc/fstab, systemd may still use a cached
version. Invoke the following command to make systemd aware of the changes
$ systemctl daemon-reload
To display all mounted file system and various information on them:
$ df
To also display file system type see next subsection.
To mount a zip drive (serial or parallel):
$ mkdir /mnt/zip100
$ mount /dev/sda4 /mnt/zip100
(This assumes /dev/sda4 is the device node for the zip drive, and that the driver
for the zip drive is installed.)
To mount a flash drive with a FAT filing system:
1) Attach the flash device.
2) Mount it on /media/usbdisk
$ mount -t vfat -o uid=jdoe,gid=broker /dev/sdb /media/usbdisk
The options specify that any file or directory on the mounted file system
should be owned by user "jdoe" and belong to group "broker".
If there is a problem with mounting the flash drive, the following commands
may help home in on the problem.
1) At the prompt type "lsusb". All recognized USB devices attached to
the computer should be listed. If the flash device is not one of
them, then it is either:
* not recognized
* not attached properly
* not working
2) If the previous step was successful (i.e. flash drive was recognized)
then type the following command line to determine which SCSI device is
"linked" with the flash device.
$ dmesg | grep -i "SCSI device"
Some lines may be identical. Look for sda, sdb, sdc, etc...
The one that seems to match your flash device (e.g. listed memory size
matches) is the correct device.
To mount an NTFS system:
$ ntfs-3g device mount_point
e.g.
$ ntfs-3g /dev/sdc1 /mnt
or
$ mount -t ntfs-3g device mount_point
e.g.
$ mount -t ntfs-3g /dev/sdc1 /mnt
To mount an SMB system:
$ mount -t cifs -o user=username //myserver/myshare mount-folder
e.g.
$ mount -t cifs -o user=jdoe //10.0.0.100/mypics /mnt
For more see here.
To mount an EXFAT system:
$ mount -t exfat device mount_point
e.g.
$ mount -t exfat /dev/sdc1 /mnt
(should have fuse-exfat installed)
------------------------------
| Displaying file systems type |
------------------------------
To display all mounted file systems and their type
$ df -T
Other commands that reveal the file system type
$ fsck -N /dev/sd* # the -N disables checking
$ lsblk -f
$ mount | grep "^/dev"
$ blkid /dev/sd*
$ sudo file -sL /dev/sd*
------------
| fstab file |
------------
The /etc/fstab file contains static mount entries. Each entry specifies
the file system to mount, the type of file system, and the mount point (i.e.
on which directory to mount it).
A sample entry:
LABEL=sandiskext /home/jdoe/myportable ext4 defaults,nofail 1 2
|______________| |___________________| |__| |_____________| | |
A B C D E F
The entry contains six fields A-F.
* A: Device identifier (could be device node e.g. /dev/sda6, label or UUID)
* B: Directory serving as a mount point (directory has to exist)
* C: File system type
* D: Options
* E: Specifies if file system is to be dumped (1) or not dumped (0 - default)
* F: Specifies order in which fsck checks files systems during boot
Note, if the mount point is a non-empty directory, and you mount a file system
on it, then the previous contents of the directory will be inaccessible until
the file system is unmounted.
An example of mount entries for the root and boot file systems
/dev/sda2 /     ext4 defaults 1 1
/dev/sda1 /boot ext4 defaults 1 2
In this example device nodes were used to identify the root and boot partitions.
In general, however, it is better to specify labels or UUID rather than device
nodes as device identifiers. The reason is that device nodes may change as
additional disks are added. For instance when adding a disk udev may encounter
that disk first and, thus, assign to it device node /dev/sda, whereas the disk
containing the system will be assigned the next available device node /dev/sdb.
Thus the root and boot partitions will be named /dev/sdb2 and /dev/sdb1,
respectively, rather than /dev/sda2 and /dev/sda1.
I altered the above example to use UUIDs to identify the root and boot
partitions:
UUID=432e896f-c6b2-424b-ae05-f7b5ab05013b /     ext4 defaults 1 1
UUID=8a45361b-bfc9-43f5-9f0d-9c2c4c83f749 /boot ext4 defaults 1 2
For more about fstab, see manpages
$ man fstab
----------
| RAM disk |
----------
Sometimes you may want to use a memory based file system rather than one that
resides on a storage device. For example you want to create a "tmp" directory
to store temporary files. Files in tmp (which are resident in RAM) will be
created and accessed much faster than if on a storage device. Another
advantage is that writing files to memory spares you the wear and tear of
writing to the hard drive, especially an SSD.
Linux provides two types of RAM disks: ramfs and tmpfs.
ramfs is the older kind and has mostly been replaced by tmpfs. Thus, it is
recommended to use tmpfs. Any tmpfs file systems will be listed alongside the
more conventional file systems when invoking the df command. Try it out.
You should see at least one such system.
$ df
tmpfs 12264016 824 12263192 1% /tmp
To create and mount a RAM disk of size 50MB
$ mount -t tmpfs -o size=50m tmpfs /home/jdoe/tmp
To make a RAM disk persistent across boots, place an entry for it in
/etc/fstab, such as
tmpfs /home/jdoe/tmp tmpfs nodev,nosuid,noexec,nodiratime,size=50M 0 0
Note, however, the contents of the RAM disk will disappear when the computer is
turned off or restarted.
If you wish to execute binaries or scripts from the RAM disk, remove the
noexec option from the list of options in fstab entry.
Refer to this webpage and this webpage for more about RAM disks on Linux.
---------------------------------------
| Automounting of file systems - autofs |
---------------------------------------
autofs is a package that contains the necessary software to mount
file systems, including pluggable devices and remote file systems, on an
as-needed basis.
For a detailed discussion see this Archwiki article or
this Ubuntu documentation page.
Basic configuration is as follows:
/etc/autofs/auto.master is the master template configuration file.
It references sub-template files.
It can contain a line like:
/media/misc /etc/autofs/auto.misc --timeout=5
Type "man auto.master" for more about this file and its configuration.
The file auto.misc is a sub-template, and any automatically mounted devices
associated with it will be mounted on /media/misc.
The timeout field specifies how long an automounted file system may remain
idle before autofs unmounts it.
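For illustration, an entry in the auto.misc map has the form
"key [-options] location". For example (the device node is hypothetical):
usbdrive -fstype=vfat :/dev/sdc1
causes /dev/sdc1 to be mounted on /media/misc/usbdrive when that path is
accessed.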
Besides configuring the templates, add the line
automount: files
to the file /etc/nsswitch.conf (name service switch)
After configuration start the autofs.service and test if things are working
$ systemctl start autofs.service
To make it persistent across reboots
$ systemctl enable autofs.service
Make sure permissions to following template files are 0644
auto.master, auto.media, auto.misc, /etc/conf.d/autofs
------------------------------------
| Creating and Checking File Systems |
------------------------------------
To create a file system (also known as formatting a partition) use the command
$ mkfs [-t fstype] filesys [blocks] [-c]
The command line arguments are as follows:
* fstype refers to the file system type
The possibilities are:
(Linux) ext2 (default), ext3, ext4, xfs
(Microsoft) exfat, vfat, msdos
(Other) minix and more
Refer to man page section "SEE ALSO".
* filesys can be the device node name (e.g. /dev/sda1) or a file that
contains the file system.
* blocks specifies the file system size. If left unspecified,
file system will fill up the entire capacity of the partition or
storage device.
* The "-c" option can be used for checking and excluding bad blocks.
(see also e2fsck below.)
Note, the command mkfs actually calls the relevant file system making
command (e.g. mkfs.ext4, mkfs.vfat, etc.)
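For example, to create an ext4 file system on a (hypothetical) partition:
$ mkfs -t ext4 /dev/sdb1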
To check a Linux file system (ext2/3/4) use
$ e2fsck filesys
* The -c option can be used to check for bad blocks (read-only test)
* Specifying -c twice performs a non-destructive read-write test as well
To create an NTFS file system:
$ mkntfs /dev/sdxx
Add -U option to generate a random UUID.
The command defaults to zeroing and checking for bad sectors.
See man page for more options.
To create a Fat32/Fat16 file system:
$ mkdosfs -F 32 /dev/devname
$ mkdosfs -F 16 /dev/devname
To create an exfat system (should have fuse-exfat installed)
If partition doesn't yet exist, use gdisk to create a partition with code 0700.
Create file system on that partition:
$ mkfs -t exfat /dev/devname
Add -f option for a full format (zero out disk; takes much longer)
$ mkfs -t exfat -f /dev/devname
If the partition has sensitive data, then use the -f option.
------------------------------------
| Resizing/Manipulating File Systems |
------------------------------------
To resize an existing Linux partition use resize2fs or fsadm.
Make sure partition can contain resized file system.
Example of resizing the file system on /dev/sdb2 to fill up the partition:
$ e2fsck -f /dev/sdb2 # First check file system!
$ resize2fs /dev/sdb2
To resize a NTFS file system use ntfsresize command.
e.g.
$ ntfsresize -s 20G -n /dev/sda1
* /dev/sda1 is the device node of the NTFS partition to be resized.
* The "-n" options tells ntfsresize to not perform any action.
ntfsresize will proceed to test if resize is possible. If the test
comes back successful then run without the -n option.
$ ntfsresize -s 20G /dev/sda1
* The "-i" options will establish the lowest limit an NTFS file system
may be downsized to.
* A useful option is --info
Run the command first with the --info option to obtain info on the NTFS
size and partition size.
$ ntfsresize --info /dev/sda1
After resizing the file system use parted to resize the partition accordingly.
* Run parted.
$ sudo parted
* It will place you in interactive (shell) mode, with a "(parted)" prompt.
Select the device
(parted) select /dev/sdb
Change units to sectors
(parted) unit s
Perform the resizing
(parted) resizepart partnumber end
The argument partnumber is the partition number, and the argument
end is the desired end-of-partition in sectors (or whatever units
you selected).
Quit
(parted) quit
Use gparted to resize the partition on a GPT disk.
Note, gparted comes with a graphical interface and is fairly intuitive to use.
gparted will also resize the file system for you.
All this can be done while the partition you wish to resize is mounted
and the system using it is running (e.g. the root partition.)
If you wish to expand a file system on a virtual drive (e.g. *.vdi),
you can clone the virtual drive with the clone being contained in a larger
drive, after which you can expand the file system.
In Virtual Box use
$ VBoxManage clonemedium ...
----------------
| UUID and LABEL |
----------------
A UUID is a 128 bit identifier generated by the computer and assigned to a
device, that uniquely identifies it.
Since a significant part of this number is randomly generated, there is
very little probability that any two devices in the world will have the same
UUID. (Unless of course you clone a physical or virtual drive and do not
modify the UUID in the clone.)
This is particularly useful for identifying USB devices. Since the device
names are assigned when plugged in (e.g. /dev/sdd1, /dev/sdd2) then it is
merely the order in which they were plugged in that determines the device
name. Therefore it is not very useful to identify a USB drive in the fstab
file by a device node name.
The following entries in a fstab file will mount a USB device:
/dev/sdb1 /home/jdoe/usbdisk vfat ...
UUID=54A7-8DD9 /home/jdoe/usbdisk vfat ...
The first way, however, will mount any device with a vfat filesystem to
/home/jdoe/usbdisk, which would mean that you would have to plug in the
USB storage device in the correct order to have /home/jdoe/usbdisk be a
specific disk.
The second way avoids this problem by identifying the device not by its
name, but by its UUID.
To determine a storage device's UUID issue the command
$ ls -l /dev/disk/by-uuid
To determine a storage device's Label issue the command
$ ls -l /dev/disk/by-label
These will list all the storage devices both by their UUID or label.
The list is actually a list of symbolic links to the device node name.
e.g.
lrwxrwxrwx. 1 root root 10 Jul 11 22:12 rootfs -> ../../sda2
You can learn from this sample output that storage device with label rootfs
is in fact /dev/sda2.
Alternatively issue
$ dumpe2fs device | grep --color=never UUID
For example
$ sudo dumpe2fs /dev/sda3 | grep --color=never UUID
Both UUID and label will appear in the beginning of the output.
Alternatively issue
$ sudo blkid devicename
Note, the latter will also tell you what kind of filesystem sits on the device
(e.g. ext3, ext4)
Besides these, there are also various programs available on the internet to
display a device's UUID or label.
If a UUID or label is known, you can identify the corresponding device node in
/dev:
$ findfs LABEL=label|UUID=uuid
To generate a random UUID
$ uuidgen
Modifying UUIDs and labels:
* To modify the UUID of an ext2/3/4 file system:
$ sudo tune2fs -U new-uuid device
("random" may be given as the new UUID to have one generated for you)
Note, tune2fs can adjust many other file system parameters.
* To modify the label of an ext2/3/4 file system:
$ sudo e2label device [new-label]
* To modify the label for a NTFS file system:
$ sudo ntfslabel device [new-label]
* To modify the label for a swap partition:
$ sudo swaplabel [-L label] [-U UUID] device
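For example, to give an ext4 partition a freshly generated UUID and a new
label (the device and label here are hypothetical):
$ sudo tune2fs -U random /dev/sdb1
$ sudo e2label /dev/sdb1 backupdisk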
-----------
| Swap Area |
-----------
The swap area is an allocation of disk space reserved by the OS to use in place
of RAM, when RAM has been exhausted.
To create a swap area
$ mkswap device
Where
* device is partition or file e.g. /dev/sda3
* Add -c option to check for bad sectors
* Add -L label to assign a label to the swap area (so it can be activated by
label)
Usually the swap is activated automatically during boot when specified in
/etc/fstab.
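A typical fstab entry for a swap partition looks like (the label is
hypothetical):
LABEL=swap0 none swap defaults 0 0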
To manually turn on all the swap areas in /etc/fstab
$ swapon -a
Turn off all the swap areas
$ swapoff -a
To turn off a specific swap area use
$ swapoff specialfile
------------------------------
| ISO9660 and UDF file systems |
------------------------------
Although CDs (optical disks) were originally used for storing high quality
music, optical media quickly became popular for storing computer data as well.
The ISO9660 standard defines a file system structure used in optical disks.
Today optical disks are quickly being replaced by USB dongles as live media
boot devices, and for general storage. Nonetheless, the ISO9660 file system is
still popular for live media boot devices and operating system (OS) installers,
independent of the media being used. For instance, when downloading an OS to be
written to a USB dongle, the downloaded file bears the extension .iso, which is
short for ISO9660.
When booting a live OS or an OS installer in a virtual machine environment,
such a file can be specified as the CDROM image from which to boot.
The Rock Ridge extensions to ISO9660 removed some limitations of the original
ISO9660 standard, in particular allowing for longer filenames, symbolic links,
and more.
In this subsection I provide a few examples of things you might encounter with
regards to this file system.
To duplicate an ISO image from a CD/DVD drive (must be in the drive) onto an
image file:
$ cat /dev/sr0 > filename.img
Note, /dev/sr0 is usually the device node name (in /dev directory) for the
CDROM drive.
To create an iso9660 filesystem
$ mkisofs -o cdimage.raw tgtdir
The contents of directory "tgtdir" will get placed in the file system.
To create an iso9660 filesystem with Rock Ridge extensions (longer filenames,
symbolic links, etc)
$ mkisofs -R -o cdimage.raw tgtdir
To test if filesystem has been created correctly (without errors)
$ mount cdimage.raw -r -t iso9660 -o loop /mnt
Now change directory into /mnt and see if it contains what you expect to find
there. To unmount it
$ umount /mnt
To write onto a (writeable) CD
$ cdrecord -v speed=2 dev=/dev/sr0:1,0 cdimage.raw
Note, for correct device specification when using cdrecord utility, refer to
section Sound & Multimedia and scroll down to subsection CD Audio
where this utility is described in more detail.
To write onto a USB storage device
$ dd if=infile of=/dev/usbtgtdrive
Or add some options to make it faster, and also see progress
$ dd if=infile of=/dev/usbtgtdrive bs=4M status=progress conv=fsync
Warning! Make sure to specify output device (of=...) correctly. Writing to the
wrong drive will irreversibly overwrite its contents.
* Example of writing an image file onto a DVD
$ cdrecord -v -eject -dao speed=4 dev=/dev/sr0:1,0,0 FC-6-i386-DVD.iso
* To blank a rewritable CD/DVD optical disk
$ cdrecord --dev=/dev/dvdrw --blank=fast
A format that's meant to supersede iso9660 is UDF - Universal Disk Format.
* To mount a CD or DVD with such a file system issue the command
$ mount -t udf /dev/sr0 /mnt
* To create a udf file system:
$ mkisofs -o cdimage.udf -udf tgtdir
Note, Virtual Box doesn't recognize extension udf. Use iso extension instead.
* You can test image by mounting it as a UDF image.
$ mount cdimage.udf -r -t udf -o loop /mnt
---------------------------------
| Repairing a damaged file system |
---------------------------------
To repair a damaged file system use the command fsck.
Read the man page for more details on how to use this command.
Repairing a file system must be done with the file system unmounted.
This can be accomplished by booting the computer from a CDROM or flash drive and
repairing the damaged file system. This method is necessary when repairing
an essential file system, such as / or /boot.
A damaged file system can also be repaired from the currently running instance
for non-essential file systems (e.g. /home or that of a plugged-in storage
device).
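For example, to check and repair a non-essential file system (the device is
hypothetical; -y answers yes to all repair prompts):
$ umount /dev/sdb1
$ fsck -y /dev/sdb1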
Sometimes when the OS enters into maintenance mode part way through the boot
cycle due to a damaged file system, you may want to repair it at that point.
For more about maintenance mode refer to this section and scroll down
to the subsection titled "Maintenance Mode".
Systemd and Services
********************************************************************************
* - Systemd and Services -
********************************************************************************
Many traditional Unix systems have used System V style init (initialization)
scripts. The purpose of any given script was to perform all the necessary
initializations and launching of a daemon process to handle functionality for a
desired service, such as networking, print spooling, etc. The init scripts are
usually executed as part of the boot process, although not necessarily.
Most Linux systems have migrated to a more modern approach of system
initialization and service handling known as Systemd, so that will be
described first.
---------
| Systemd |
---------
Systemd is a service manager that replaces (but is compatible with) System V
style init scripts and run levels. Systemd manages daemons, which are services
that run in the background such as syslog, ssh, ntpd, and are
normally brought up at boot time. Systemd units are the basic building blocks
of Systemd. Files associated with these units are stored in the following
locations:
* /usr/lib/systemd/system
* /run/systemd/system
* /etc/systemd/system
Systemd units come in different types. The most common type that the
administrator interacts with is a service unit, which has a file extension
".service". For example sshd.service refers to the secure shell service.
The command systemctl is the user's interface with systemd.
To get a listing of all currently running services, issue
$ systemctl
To start/stop/restart a service
$ systemctl start|stop|restart foobar.service
Note, in systemd, init scripts are mapped to services.
Examples of using systemctl --
* To enable the ssh daemon service, so that it is started at boot time.
Note, "enable" does not start the service immediately, but the setting is
persistent across reboots.
$ systemctl enable sshd.service
* To start the ssh service now, issue
$ systemctl start sshd.service
Note, "start" does not make the service persistent across reboots.
* To query if the ssh service is up and running
$ systemctl is-active sshd.service
* To list all installed services and their status
$ systemctl list-unit-files
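Note, enabling and starting a service (the first two examples above) can be
combined into one step with the --now option:
$ systemctl enable --now sshd.service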
To see which systemd services are launched at boot, issue
$ ls /etc/systemd/system/*.wants
If you find your computer's performance to be poor, try identifying if any of
the service daemons are hogging up the processor or memory.
It's not a good idea to simply disable a service if you don't know what that
service does.
It might be worthwhile to have a look at a basic Arch Linux installation to see
which services are running there. A basic Arch Linux installation will have
only essential services, like networking. Even services like a firewall or
ssh server are not automatically installed. This could give you an idea as to
which services you can eliminate to boost performance.
If playing around with services, it's best to use "systemctl stop servicename"
to stop a service only for the current run. If no problems arise by having
stopped the service, and you know you don't need that service, then disable it.
Warning: Do not stop or disable a service without being aware of what
that service does, and the implications of being without it.
--------------------------
| systemd-journald.service |
--------------------------
This service keeps a journal of all that happens on a running Linux system from
the moment it is launched (which is early on in the boot process) till when it
is terminated (close to the end of the shutdown process).
The command-line tool journalctl is used to query the journal.
Without options, the full contents of the journal will be displayed.
This is a useful tool in identifying the cause of errors in the boot process or
shutdown process, as well as identifying services which failed to start or
stop.
To show a log for the current boot
$ journalctl -b
The -b [ID][offset] option can be followed by an argument specifying for which
boot sessions to display the log.
An offset of 1 refers to first boot in journal; 2 refers to the second,
and so forth.
An offset of -0 refers to the last boot, -1 to the one before that, etc.
To add explanation texts from the message catalog append option -x.
See manpage for more.
Filtering is possible. For instance to filter by service issue something like
$ journalctl _SYSTEMD_UNIT=avahi-daemon.service
This filters all entries generated by the avahi-daemon service.
Equivalently
$ journalctl -u avahi-daemon.service
To show info on a unit
$ systemctl show unit-name # substitute for desired unit
Configuration of systemd services can be accomplished by editing the
relevant unit file /usr/lib/systemd/system/servicename
e.g.
/usr/lib/systemd/system/systemd-binfmt.service
Note, files under /usr/lib/systemd/system may be overwritten by package
updates; local overrides are better placed under /etc/systemd/system.
------------------------------------
| Partial list of available services |
------------------------------------
binfmt.service
bluetooth.service - handles bluetooth device communication
wpa_supplicant.service - handles WPA/WPA2 (wifi) negotiation.
(Future work - add a more significant list of services and brief explanation)
---------------------------
| Systemd - Further reading |
---------------------------
* Red Hat Documentation on Systemd
* Wikipedia article on Systemd
* Systemd homepage
* Why systemd?
See also man page for following topics and commands:
* systemd
* systemctl
* systemd.service
* init, telinit
------------------------
| System V type Services |
------------------------
System V (SysV) actually refers to a commercial version of Unix developed by
AT&T. Many Linux distributions initially used System V type init scripts to
bring up essential services in the boot stage. However, as mentioned above,
most Linux distributions today use systemd to launch and manage services,
and traditional init scripts have been replaced by systemd unit files.
But since systemd was made to be compatible with SysV init scripts, you may
still find SysV init scripts being used in a systemd based OS.
This section is relevant to you if you employ a system that uses SysV type init
scripts rather than systemd, or a systemd based distro that still employs some
SysV init scripts.
--------------------------------
| Starting and stopping services |
--------------------------------
To enable or disable services that are started on demand by the inetd
super-server, edit /etc/inetd.conf.
To start and stop various services without rebooting, go into /etc/rc.d/init.d,
identify the service you wish to start or stop, and issue
$ ./service_name stop
or
$ ./service_name start
In Debian based distributions, to permanently cause init.d services
to start or stop use "update-rc.d" command.
It will insert the appropriate symbolic links in the /etc/rcX.d
directories.
$ update-rc.d servicename defaults
See man page for usage examples.
Additionally the "service" command can be used to restart (stop and start) a
service.
For example, to restart cups
$ service cups restart
To display the status of all services
$ service --status-all
chkconfig is a command line tool for maintaining the /etc/rc[0-6].d directory
hierarchy.
To identify SysV type services that are launched at boot time, use
$ chkconfig --list
The output shows which SysV services are on/off and for which runlevels. This
can help determine which services start up in boot, and in what sequence.
Note, newer installations that have migrated completely to systemd will not
have services shown by chkconfig.
Runtime level
-------------
------------------------
| SysV - Further reading |
------------------------
* Difference between BSD and SysV Unix
See man page for command service.
Udev
********************************************************************************
* - Udev -
********************************************************************************
------
| udev |
------
A good reference on udev and writing udev rules can be found here.
The Linux system provides an interface between applications and devices
through file like nodes in the directory /dev.
Communicating with the devices is accomplished by writing or reading from
the device's "file" in /dev (e.g. /dev/sda, /dev/modem).
For example writing to a mass storage device (e.g. hard drive, flash disk)
can be accomplished by a command as simple as
$ cat file > /dev/sda
Warning! The above command will overwrite the beginning of the storage
device with "file", which means the partition table as well as data on the
disk will be irreversibly wiped out.
To access the storage device on a partition basis, separate device files are
used (e.g. /dev/sda1, /dev/sda2).
It used to be that every single device expected to attach to a system
had its device file hardcoded into /dev.
With the proliferation of devices this mode of operation became unwieldy
and devfs was created to populate /dev with device files for only those
devices that were plugged in.
udev supersedes devfs.
Many drivers provided by vendors and Linux distributions come ready to use
out of the box. That is, no user configuration is required.
However, sometimes it is necessary for the user to intervene in the default
naming of a device file, or even create the proper device file in the /dev
directory. This is where udev rules come into play.
The tool udevinfo can be helpful in constructing a udev rule when
the top level device is known.
For example to get attributes for storage device /sys/block/sda, issue command
$ udevinfo -a -p /sys/block/sda
If udevinfo is not available in your installation then use the more general
administrative command, udevadm, to obtain info
$ udevadm info -a -p /sys/block/sda
(See below for more about this tool)
The attributes provided can be used to construct the udev rule.
For more about rule writing refer to the above referenced document.
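As a sketch of what such a rule might look like (the vendor/product IDs and the
symlink name here are made-up placeholders), placed in a file such as
/etc/udev/rules.d/10-local.rules:
SUBSYSTEM=="block", ATTRS{idVendor}=="abcd", ATTRS{idProduct}=="1234", SYMLINK+="backupdisk"
Such a rule would cause udev to create an extra device node /dev/backupdisk
whenever a matching block device appears.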
----------
| udevadm |
----------
The udevadm (udev administration) tool is a very handy tool for monitoring
what happens when devices are plugged in and unplugged.
Sample usage:
$ udevadm monitor [-k] [-u] [-p]
where
-k = Display only kernel events
-u = Display only udev events
-p = Display associated properties of events
It is beneficial to use udevadm simultaneously on a host and a virtual machine
while grabbing a USB device from host to guest and then relinquishing the
device. This can be used for debugging problems with capturing USB devices,
as well as helpful in writing udev rules.
Modules
********************************************************************************
* - Modules -
********************************************************************************
It used to be that device drivers were hardcoded into the Kernel, or the Kernel
was compiled with driver support for such and such devices. This has changed
with the proliferation of hardware and the need for a more dynamic method of
loading hardware drivers. Thus came into being a system of modules.
Modern Linux kernels load (or unload) drivers dynamically using loadable
modules. A good introduction to kernel modules can be found here.
This section introduces some of the commands used to manage modules, and
outlines some basic procedures for module configuration.
------------
| The basics |
------------
* To list loaded modules:
$ lsmod
* To get info on a module (including parameters you can pass to the module):
$ modinfo modname
* To load (and activate) a module:
$ modprobe modname
For example to load a module that provides scanner capability
$ modprobe scanner
To pass a parameter to the module:
$ modprobe modname paramtoset=value
(use modinfo command to see what parameters are available.)
To pass parameters to a module during boot, see subsection below on
configuration.
* To unload a module (i.e. remove a module from the running kernel.)
$ modprobe -r modname
-----------------------
| Ignoring dependencies |
-----------------------
Many modules depend on other modules to function. The command modprobe
intelligently installs or removes a module, taking into account dependencies.
The following two commands also install/remove modules, but do not take into
account needing or breaking dependencies. This may be desired when debugging
problems in a system.
* To install a loadable module in the running kernel, ignoring dependencies:
$ insmod modname
For example to install the parallel port zip drive module
$ insmod ppa
* To remove a loadable module from the running kernel, ignoring dependencies:
$ rmmod modname
--------------------------
| Configure module loading |
--------------------------
Modules and udev
----------------
The directory /etc/modprobe.d contains *.conf files that pass module
settings to udev in order to manage the loading of modules during system
boot.
For modules that do not need to be loaded at boot time, use udev to load these
modules on an as-needed basis. See the udev section above about configuring
module loading with hardware detection.
Modules and initramfs
---------------------
Some modules are loaded by initramfs (the RAM file system initially used
by the Kernel to get things going. See About initramfs.)
mkinitcpio, which is an initramfs image creation utility, can be
used to configure module loading at the initramfs stage.
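As a sketch (the module names here are placeholders), modules can be added to
the initramfs image by listing them on the MODULES line of /etc/mkinitcpio.conf
(depending on the mkinitcpio version this is a quoted string or an array), and
then regenerating the image with the distribution's preset:
MODULES=(ext4 i915)
$ mkinitcpio -p linux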
Parameters at boot
------------------
To pass parameters to a module during boot, create and edit the file
/etc/modprobe.d/desired_module.conf and insert the line:
options modname paramtoset=value
(Substitute for "desired_module" the name of the module you wish to affect.)
For more on setting module options refer to this webpage.
Blacklist modules
-----------------
Sometimes two modules may conflict with each other, in which case it may
be necessary to blacklist one of the modules.
In order to do this, create or edit a *.conf file in /etc/modprobe.d/
(say blacklist.conf), and insert line:
blacklist modname
For more on blacklisting refer to this webpage.
----------
| Firmware |
----------
Often the right module alone is not sufficient to make a particular hardware
device work. Drivers for hardware devices often need to load firmware onto
the device (e.g. a wireless adapter). The firmware may be proprietary or not.
If the firmware is not proprietary, then it may already be available as part
of the distribution or in its repository, in which case you will probably not
need to do anything else but load the modules.
If, on the other hand, the firmware is proprietary, and you are able to download
it from the website of the maker of the device, then you will need to install it
in the correct location. The directories in which firmware usually resides are
/usr/lib/firmware or /lib/firmware.
Sometimes it is necessary to research various forums to get a hold of which
firmware you require, and where to obtain it.
SELinux
********************************************************************************
* - SELinux -
********************************************************************************
--------------
| Introduction |
--------------
Security in Linux can be managed on three levels.
* Discretionary Access Control (DAC) -- This is the traditional file
mode permissions in Unix.
Files and directories are assigned read, write and execute flags,
for user, group and other.
* Access Control List (ACL) -- Expands on the former, by adding additional
access control capabilities.
See here and here for more about it.
* Mandatory Access Control (MAC) -- A more comprehensive method for
granting access and control to files and computer resources.
SELinux is one type of MAC available for Linux, and is the topic of this
section.
SELinux manages Linux security. It extends the traditional mode permissions
that come with the Linux file system, and works with a preloaded policy. The
extended security defines additional security modes, allowing for more fine
grained control over who has access to a file, directory or application and
what that user can do with the file, directory or application, or what the
application itself is permitted to do.
Read more on SELinux at the SELinux FAQ webpage.
Read more on the differences between security systems implemented in Linux in
this article.
SELinux labels files and directories with a four level context descriptor
(this is referred to as a "MAC" style of security system).
This is similar to the standard permission flags found on any Unix/Linux filing
system (DAC).
The difference between DAC and MAC can be explained as follows:
In DAC a file with permissions labeled "-rwxr-xr--" means:
* user (owner of file) has read-write and execute permissions (rwx)
* group has read only and execute permissions (r-x)
* other (guest) has read only permissions (r--)
If a program is being accessed by user, that program has full read, write
and execute privileges with respect to that file.
A guest user (other) on the other hand has only read permission.
In contrast, SELinux does not automatically grant any program the ability
to operate on the file.
It rather uses context rules to determine what the program can do to the file.
For example, a samba server will not be allowed by SELinux to share a file
without the file being labeled "samba_share_t".
So even though the file might have read permissions for other, SELinux will
prevent a guest user from being able to see (read) this file from a Samba share.
However, a guest user who is not accessing this file from a share, but rather
from the local file system, will be able to see it.
For configuring SELinux for Samba, see Samba in netman.
For configuring SELinux for rsync, see rsync in netman.
Note, most commands here need to be run as sudo.
-----------------
| SELinux context |
-----------------
Context rules are the force behind security with SELinux.
They may prevent remote users or even other applications from reading or
modifying or executing files they have no business touching. Security is thus
enhanced, as malicious users or malicious software are prevented from accessing
and/or deleting user content.
Examples of changing context:
* To change context labels of directories or files
$ chcon -t type dir|file
where type is the SELinux context (e.g. samba_share_t, rsync_data_t).
* Add -R switch to descend recursively into a directory
$ chcon -R -t type dir|file
To view the context of a given directory or file use:
$ ls -Z dir|file
See Samba in netman for more examples.
----------
| Booleans |
----------
To change the behavior of SELinux during runtime, one must set or unset
boolean variables.
The commands for probing a variable are
$ getsebool -a # For all variables and their states
$ getsebool boolean_var_name # For a specific boolean variable
To set a boolean variable, issue the command
$ setsebool boolean_var_name state (state=0/1)
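For example, adding the -P switch makes the setting persistent across reboots.
The boolean shown here is one commonly available for Samba, but check the output
of getsebool -a for the exact names on your system:
$ setsebool -P samba_enable_home_dirs 1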
---------------
| SELinux modes |
---------------
When enabled, SELinux can operate in one of two modes:
* enforcing mode
In this mode, SELinux refuses actions that are not in accordance with
the configured policy.
* permissive mode
In this mode SELinux does not refuse actions, but logs those actions
which would have been refused.
The latter mode is useful for debugging access problems that you suspect
may be SELinux related.
* If SELinux is running in permissive mode, you can switch it to enforcing
  mode on a one time basis with
$ setenforce 1
* To place SELinux in permissive mode on a one time basis
$ setenforce 0
To configure the SELinux mode at boot, edit the file /etc/selinux/config.
Look for a line
SELINUX=...
* To implement enforcing mode the line should be
SELINUX=enforcing
* To implement permissive mode the line should be
SELINUX=permissive
* To disable SELinux
SELINUX=disabled
-------------------------
| Managing SELinux policy |
-------------------------
semanage is the utility that is used to manage SELinux policy.
...
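As a sketch (the path and context type here are illustrative), a persistent
file context rule can be added with semanage and then applied to existing files
with restorecon:
$ semanage fcontext -a -t samba_share_t "/srv/share(/.*)?"
$ restorecon -Rv /srv/share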
-----------------------------
| Logging and troubleshooting |
-----------------------------
When troubleshooting with SELinux, it is useful to first identify if SELinux
is indeed the cause of the problem.
The first step is to put SELinux into "permissive mode"
$ setenforce 0
If the problem goes away, then SELinux is indeed the cause.
Open the log file /var/log/audit/audit.log with a file pager or editor,
and search for activities in your log that match the service or action you are
troubleshooting (e.g. samba, rsync)
If SELinux was totally disabled, then put SELinux in enforcing mode and repeat
the problematic action, and then look at the log.
To get a more human readable and friendly account of the log events use a
utility called sealert. This utility will not only display the log item
in a more verbose and readable manner, it will also suggest what needs to be
done to fix the problem.
Note, this utility is part of the troubleshoot daemon package, and will need
to be installed. In Fedora the package is setroubleshoot-server.
Example:
$ sealert -a /var/log/audit/audit.log
Note, when running sealert without specifying the log file, I get a message:
"could not attach to desktop process".
The above invocation will generate diagnostics for all entries in the audit.log
file, and will thus take some time to generate.
It is more sensible and much faster to copy the relevant log entries from
audit.log into a temporary file (e.g. select_audit_entries.txt), and run sealert
on it.
$ sealert -a /tmp/select_audit_entries.txt
For more about how to understand and interpret audit.log entries and the sealert
command, refer to this tutorial.
--------------
| Miscellaneous |
--------------
Sometimes it may be necessary to cause SELinux to relabel the file system.
For instance, this may be necessary when relocating the home directories to a
different partition or disk.
To cause SELinux to relabel the file system on boot:
Create blank file: /.autorelabel
(make sure directory /home/user is owned by "user")
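For example, the blank file can be created with touch:
$ sudo touch /.autorelabel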
Note, relabeling a file system will undo any context changes made with chcon.
To make changes persist across relabels, add them to the policy with semanage
fcontext (see the "Managing SELinux policy" subsection above) rather than chcon.
To check SELinux status:
$ sestatus
SELinux does not allow a shared library or ELF binary to execute instructions
from the stack (that is, to require an executable stack).
The program "execstack" can be used to query whether a specific library or
binary requires the stack or not:
$ execstack -q libpath/libname
It can also be used to clear the flag so the library no longer requires an
executable stack:
$ execstack -c libpath/libname
See man page for more.
Power Management
********************************************************************************
* - Power Management -
********************************************************************************
Modern Linux installations that come with a graphical desktop (e.g. GNOME, KDE,
LXDE, etc.) offer buttons to manage the power states of the computer (i.e.
power off, reboot, suspend, hibernate.)
Power management command line commands are also available. This is particularly
useful for systems without a desktop environment, and for scripting.
* shutdown
Can be used to halt, power off or reboot the machine. It also takes a time
argument (how long until the action), and a message to display on user
consoles (see the wall command below.)
shutdown also has shortcut commands that implement one of the power state
changes.
$ halt
$ poweroff
$ reboot
wall is a utility that sends a system wide message, and is often used to
warn users when a multi-user system is being shut down.
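As a sketch combining the two, the following powers the machine off in 10
minutes and broadcasts a warning (via the wall mechanism) to all logged-in
users:
$ shutdown -P +10 "System going down for maintenance in 10 minutes"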
Relevant directories/files for power management:
/etc/pm
...
------------
| Suspending |
------------
Linux also allows suspending a machine on hardware that supports this feature.
Suspending involves "freezing" the operation of the computer until activated
again by a press of a button or key or some other trigger.
In this mode the state of the processor and other essential components and
processes are saved into memory, while only essential components are kept
powered up (e.g. RAM).
The suspend feature offers very low power consumption, while enabling the
machine to quickly (within a second or two) go back to being completely
operational.
Hibernation is also a supported feature.
In hibernation the state of the processor and memory are saved to disk, and
the machine is totally powered off. On resumption the processor state and RAM
are restored from disk. With hibernation, restoration is significantly longer
than with suspension, but no power is consumed.
Some of the relevant commands to implement these features in Archlinux:
/usr/sbin/pm-suspend - Shutdown most computer services. Low power consumption.
/usr/sbin/pm-hibernate - Save computer state to disk and completely shutdown.
No power consumption.
/usr/sbin/pm-suspend-hybrid
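On systemd based distributions the equivalent operations are also available
through systemctl:
$ systemctl suspend
$ systemctl hibernate
$ systemctl hybrid-sleep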
Scheduling Tasks with Cron
********************************************************************************
* - Scheduling Tasks with Cron -
********************************************************************************
Cron is a service/daemon that handles job scheduling in Unix systems.
An administrator might use cron to perform automatic scheduled backups.
An ordinary user might choose to have cron post to himself important reminders,
perform a weekly cleanup of a certain directory, or send an automatic email
every Tuesday and Thursday to a group of co-workers.
In order to use cron, first enable and start crond (the cron daemon)
$ systemctl enable crond.service
$ systemctl start crond.service
---------
| crontab |
---------
crontab is the utility that a user invokes to edit a cron schedule.
Each user on a system has one schedule. Any number of scheduled tasks can be
placed in that schedule (examples to follow).
To edit your cron schedule
$ crontab -e
To display the cron schedule
$ crontab -l
To remove your cron schedule
$ crontab -r
To load a cron schedule from a file
$ crontab filename
To edit a cron schedule for a user other than you
$ sudo crontab -u username -e
---------------------------
| Format of crontab entries |
---------------------------
General format of a crontab entry:
* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
For example, the line
*/2 * * * 0-4 /home/jdoe/bin/mailcheck
will tell cron to run the command mailcheck every 2 minutes from
Sunday (0) to Thursday (4).
Note: */2 means "every two"; since it appears in the first field, it means every
two minutes.
The following will run mailcheck on the hour, every hour.
0 * * * 0-4 /home/jdoe/bin/mailcheck
whereas the following will run mailcheck five minutes past the hour, every hour.
5 * * * 0-4 /home/jdoe/bin/mailcheck
In general an asterisk specifies running the command every
hour/day-of-month/month/day-of-week.
A number specifies a specific hour/day-of-month/month/day-of-week.
If you would like to execute multiple commands use "&&" to separate them.
e.g.
00 * * * * echo `date` > /tmp/log.txt && sleep 60 && echo `date` > /tmp/log.txt
This will print current date into /tmp/log.txt and a minute later will do the
same.
Note: If any command in the sequence fails, the rest of the line will not be
executed.
For more details and examples refer to the man page of crond and crontab.
$ man crond
$ man crontab
Sendmail - The Unix email server
********************************************************************************
* - Sendmail email server -
********************************************************************************
Sendmail is an email server software for Unix that has been around since the
1980's. There are quite a few other email servers for Unix/Linux systems.
In this section I focus on sendmail, and briefly discuss related topics and
utilities.
Configuring sendmail is done in one of three ways:
1) Edit the /etc/sendmail.cf file directly
2) Edit the /etc/sendmail.mc file and compile it:
m4 sendmail.mc > sendmail.cf
3) Run "mailconf"
Once configured you can stop and restart the sendmail daemon as follows:
$ systemctl restart sendmail.service
If not enabled, then enable it
$ systemctl enable sendmail.service
With init scripts, stop and restart as follows
$ /etc/rc.d/init.d/sendmail stop
$ /etc/rc.d/init.d/sendmail start
------------------------------
| Important configuration info |
------------------------------
If you want to masquerade as a different host then include in sendmail.cf
DMmasquerade_host
To specify a server to handle local mail
DMlocal_mail_server_host
To specify a smarthost to handle mail to the outside world (the smart
host should know how to route such mail). Note that the smart host
should recognize your host (your hostname should be in its DNS
server), otherwise the SMTP transaction will terminate prematurely.
DSsmart_host
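As a sketch, for a hypothetical host that masquerades as example.com and relays
outgoing mail through mail.example.com, the corresponding sendmail.cf lines
would look like:
DMexample.com
DSmail.example.com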
If you run the imap daemon (imapd) then you ...
The mail file to which sendmail delivers mail is specified in the
MAIL environment variable.
e.g
MAIL = /var/spool/mail/jdoe
-----
| NFS |
-----
If you have an NFS mail account (mail folder that resides on an NFS server) you
need to mount it via fstab in order for it to be accessible:
e.g. in /etc/fstab have the line:
mailserver.mycompany.com:/var/spool/mail /var/spool/mail nfs soft,rw 0 0
Where mailserver.mycompany.com is the name of the mail server (the IP address of
the server also works, but IP addresses are more likely to change than server
names).
If you have local email accounts stored in /var/spool/mail, then mounting
the NFS mail account as above would cause the local accounts to become
inaccessible.
In that case you will need to mount the NFS account elsewhere.
e.g. in /etc/fstab add the line:
mailserver.mycompany.com:/var/spool/mail /var/spool/mailnfs nfs soft,rw 0 0
In this example all mail accounts in NFS will be found in the
/var/spool/mailnfs directory
------------------------
| Useful email utilities |
------------------------
If you have an email account on someone else's server (e.g. a Gmail account),
you may want to use the utility fetchmail to download those emails
into your own computer's email file.
You can poll for new emails as such
$ fetchmail -d poll_seconds
Note: run it as root. Make sure .fetchmailrc is configured correctly.
To download the emails using the POP3 protocol, issue
$ fetchmail -p POP3 -u username emailservername
You will be prompted for a password.
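A minimal ~/.fetchmailrc sketch (server, user name and password here are
placeholders; the file should be readable only by its owner):
poll pop.example.com protocol POP3 username "jdoe" password "secret"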
procmail is a utility that processes incoming mail.
Some email notification utilities:
* xbiff - classic mailbox image, polls /var/spool/mail etc...
* trysterobiff - nonpolling imap capable notifier
* gnubiff
* asmail
-----------------
| Troubleshooting |
-----------------
In case of difficulties with the local mail system start by looking in the
file: "/var/log/maillog"
To speak directly with sendmail or whatever mail delivery agent is being
employed you can connect to port 25 via telnet and speak to it directly:
$ telnet mailserver 25
e.g.
$ telnet localhost 25
Once I'm connected to port 25 I can interact with it via a set of SMTP
commands. Type HELP for a list of topics and HELP TOPIC for help on a
particular topic.
Note, today most mail servers are secured and do not accept mail submission on
port 25. Rather they use port 587 with TLS encryption.
For more about SMTP and ports refer to this blog.
Printing
********************************************************************************
* - Printing -
********************************************************************************
------
| CUPS |
------
CUPS provides computers running Unix like OSs the ability to act as a
print server. Fedora 3.0 and on uses the CUPS system.
CUPS provides a print spooler/scheduler service, and converts files sent for
printing to the language understood by the printer (e.g. postscript).
See this Wikipedia article for background about CUPS, and on computer printing
in general.
CUPS provides a web access interface via port 631 at URL
http://localhost:631/
For documentation browse URL
http://localhost:631/overview.html
CUPS works with both applications and line commands.
Example of line commands:
* To see a list of available printers and their current status
$ lpstat -p -d
* To print to a specific printer
$ lp -d printer filename
$ lpr -P printer filename
* Passing printer options
$ lp -o landscape -o scaling=75 -o media=A4 filename.jpg
$ lpr -o landscape -o scaling=75 -o media=A4 filename.jpg
$ lpr -o page-ranges="1,3-5,16" filename.jpg
To check status of print jobs on Myprinter:
$ lpq -P Myprinter
Sun: lpq -PMyprinter
HP: lpstat -PMyprinter
To remove a print job from the print queue:
$ lprm -P Myprinter job_number
You can identify the job_number with the lpq command above.
Sun: lprm -PMyprinter job_number
HP: rcancel job_number Myprinter
Note the files may be of any format. CUPS will recognize the format and send
it through the appropriate filter and driver.
Many other printing options are available.
See http://localhost:631/sum.html
Options can be passed to lpr. For example
$ lpr -o sides=two-sided
--------------------------------------
| Printing related configuration files |
--------------------------------------
Note, use the system-config-printer utility to configure settings rather than
editing the files directly.
* CUPS directory
/etc/cups
* Printers which are accessible to CUPS, and settings
/etc/cups/printers.conf
* Printer information (automatically generated by CUPS daemon)
/etc/printcap
* Edit this file to specify a default paper size
/etc/papersize
----------------------------------
| Utilities to help conserve paper |
----------------------------------
Note, some of the commands mentioned here and in subsequent subsections may not
be available for your Linux distribution.
To create a space efficient postscript file from text files, that is, place a
few pages worth of print on one page:
$ enscript -pfilename.ps -2rG filename.txt
To print directly to printer "Myprinter"
$ enscript -dMyprinter -2rG filename.txt
Another method using mp command:
$ cat filename.txt | mp -lo > filename.ps
The mpage command can arrange either text files or postscript files a few
pages per page.
$ mpage -ba4 -2 filename.txt > filename.ps
$ mpage -ba4 -2 filename.ps > filename2.ps
The "-2" option tells the command to arrange two pages of print per page.
In the first example the input is a text file, and in the second example a
postscript file.
One can also use the psnup command to place a few postscript pages per page.
$ psnup infile outfile
Some useful options:
-nup num_slides_per_page .... Number of input pages per page
-l .......................... Landscape mode -- rotated 90 anticlockwise
-r .......................... Seascape mode -- rotated 90 clockwise
-s scale_factor ............. scale_factor is a decimal number (e.g. 1.5)
-mmargin .................... Add more margin to page # e.g. -m0.5cm
-b .......................... Place line border around pages
------------------------
| Screenshot/Screen dump |
------------------------
To print screen dump (screen capture or screenshot)
A few commands are available for dumping the contents of an X-Window to a file
or printer:
* Dump to a file (xwd writes its own XWD format; it can be converted to other
  formats with a tool such as ImageMagick)
$ xwd -out filename.xwd
* Same as xwd, except it sends the screenshot to the printer
$ xdpr -device ps -Pprintername
* Dump window to ppm format file
$ xwintoppm -id window_id > outfile.ppm # use xwininfo to obtain window ID
The ImageMagick suite also offers a utility for taking a screenshot and saving
to a file:
$ import filename
e.g.
$ import mywin.gif
Any format supported by ImageMagick can be specified as output (e.g. jpg, png.)
---------------
| Miscellaneous |
---------------
Utilities to convert a text file to other formats
* pandoc - Pandoc is a Haskell library for converting from one markup
format to another, and a command-line tool that uses this library.
* unoconv - Converts any document from and to any LibreOffice supported
format.
To convert a raw data image file to a postscript file:
$ ipscript
(This command might no longer be available).
To convert a postscript file to pdf
$ ps2pdf input.ps output.pdf
To convert an eps file to pdf (preserves bounding box)
$ epstopdf input.eps output.pdf
To convert a pdf file to postscript
$ pdf2ps input.pdf > output.ps
Another option is the poppler utility pdftops, which allegedly does a better job
$ pdftops
To save a plot as an encapsulated postscript file from Matlab command line:
$ print [-deps] filename.ps
Modems
********************************************************************************
* - Modems -
********************************************************************************
If you have a fax modem or GSM modem, then you may find this section useful.
----------
| Hardware |
----------
If you are planning on using a modem with Linux or another Unix-like OS, then
it's best to purchase a conventional modem rather than a softmodem (sometimes
called winmodem). Softmodems come with minimal hardware leaving most of the
processing to the OS, which means they require a specialized driver that is
usually not available for Linux. Softmodems that do work with Linux are often
referred to as Linmodems.
For more about softmodem read here.
---------
| minicom |
---------
In many cases it will be necessary to communicate with the modem manually via a
serial terminal. minicom (or xminicom) serves that purpose for Linux systems
and putty for Windows. Modem software communicates with the modem using AT
commands (see more about that in the subsection below), and one can enter
AT commands via minicom to talk to the modem directly.
As an aside, sending and receiving files from one minicom to another end
connection (e.g. another minicom session running on the dialed number's
location) can be done with protocols such as xmodem, ymodem, zmodem or kermit.
These can be configured within minicom, in particular to specify the location
of the kermit executable and which options to pass to it.
Note, there are two versions of kermit you can install:
* gkermit (GNU kermit) is less feature rich.
* kermit (ckermit) is the classic kermit and is richer in features.
Use kermit command to launch ckermit.
Initially minicom should be started in setup mode (with root privileges).
$ sudo minicom -s
After setting everything up save configuration as dfl file.
If running as root it will save it in /etc.
If running as an ordinary user, use "save as dfl", and select some local
location to save to.
-------------
| AT commands |
-------------
The most basic or classic set of AT commands are known as "Hayes AT commands",
after their originator. Many extensions to the AT commands have been
introduced as different kinds of modems evolved. We list here only a few basic
examples of such commands.
ATE0 - Turn off echo of AT commands on screen
ATE1 - Turn on echo of AT commands on screen
ATS0=2 - Answer on second ring
AT+FCLASS=8 - Puts modem in voice mode (=0, =1, =2 puts it in data and
fax classes 1/2 mode respectively)
AT+VSM=? - Lists codecs for audio
AT+VSM - Set audio codec
AT+VTX - Begin audio transmission
AT+VLS=0, AT#VLS=0, ATH - Any of these commands terminate the voice call but
remain in voice mode
ATZ - Terminate voice call and get out of voice mode
ATD<number>[;] - Dial <number>; append ; for a voice call regardless of +FCLASS
Note, there are many internet references on AT commands.
--------
| Mgetty |
--------
For complete information on mgetty, type
$ info mgetty
The basics for configuring mgetty are described here.
We'll assume here the modem device is configured as ttyS2 (COM3).
Include at the end of /etc/inittab the line "S1:23:respawn:/sbin/mgetty ttyS2"
You can also manually run "/sbin/mgetty ttyS2" from a root shell.
Edit the file /etc/mgetty+sendfax/mgetty.config to set mgetty parameters.
Spawning of mgetty can be done without restarting the computer by sending the
computer to run level 2
$ sudo init 2
Note, this will boot you out of your graphical desktop.
Mgetty works with other programs simply by looking in /var/lock for a lock
file on the serial port (e.g. "LCK..ttyS2"). If it finds one then it leaves
the modem alone, and if there is none it will attend to any incoming calls.
----------------
| Mgetty-Sendfax |
----------------
Mgetty-sendfax is a suite of tools for sending and receiving faxes.
Note, efax and efax-gtk offer an alternative (and easier) faxing solution.
See below for more.
To use mgetty-sendfax you'll need to install it from your repository.
Once installed, edit the configuration file /etc/mgetty+sendfax/sendfax.config,
and configure as desired.
The fax-devices parameter should be configured to your modem port
(e.g. ttyS2, ttyACM0)
See subsection below titled "Trendnet fax/modem" on how to obtain the modem
device name for a USB modem.
For a PCI or PCI-E modem you can get various information on it by issuing the
command
$ lspci -v
and identifying the entry for the modem.
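As a sketch, the relevant lines in sendfax.config might look like the following
(the device name and fax id are placeholders for your own):
fax-devices ttyACM0
fax-id +1 215 999 9999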
Fax documents are sent and received using the g3 image formats (*.g3).
Here are suggestions for a few case scenarios for converting to the g3 format:
* Case I (A scanned image)
Scan the image as a grayscale 200 dpi image using pnm format.
Use an image editor such as gimp to threshold the image between 200 and 255,
and save as pbm.
Note, threshold values may vary for different image intensities and contrasts.
Note, see below for how to threshold using command line utilities.
Convert to g3 format using
$ pgmtopbm file.pnm | pbm2g3 > file.g3
According to the man page pgmtopbm is obsolete, and one should use pamditherbw
instead.
* Case II (A postscript file)
Use ghostscript to convert a postscript file to fax file format g3:
$ ghostscript -dNOPAUSE -sDEVICE=faxg3 -sOutputFile=file.g3 file.ps
Note, this method doesn't seem to work properly. The faxes it produces
don't seem to get accepted. Perhaps another fax driver is needed. Perhaps
when there is grayscale or color in the document, the resulting fax document
is incompatible with some fax machines.
Tried converting to pnm and then to g3 via method below and noted a resolution
mixup.
In short, I don't have a good handle of this method.
* Case III (A pdf file)
Use pdf2ps to convert to postscript.
$ pdf2ps input.pdf output.ps
or
$ pdf2ps input.pdf > output.ps
Continue with method II.
* Case IV (A latex file)
Compile latex file using latex.
Convert to postscript via dvips.
Proceed as for case II.
Alternatively, compile using pdflatex to generate a pdf file.
Proceed as with case III.
* Case V (xournal file)
Save xournal file.
Press print button.
Select print to file and save as file.pdf
Note, when I used "export to pdf" and then tried to convert to g3, for some
reason it failed. This is probably no longer problematic.
Note, if you have the ImageMagick suite installed on your computer, you can
use its utilities to both view and convert to and from g3 format.
You can concatenate multiple g3 files with
$ g3cat file1.g3 file2.g3 ...
Use the sendfax command to send a fax:
$ sendfax file.g3
Sending multiple pages:
$ sendfax file1.g3 file2.g3 ...
You can also use the fax spooling accessory.
By default sendfax is set for root permissions only. Set to user permissions
if you wish to run as non administrator.
Faxes can be viewed with viewfax.
To print out you must first convert to pbm format and then to ps or pdf
$ g32pbm file.g3 | pbmtolps > file.ps
Also available is the efix utility to convert between fax formats and other
formats, and in reverse.
Basic usage is
$ efix file
The default behavior is to detect the file type automatically, and output to
standard output in tiffg3 format.
Use the -i and -o options to specify input and output format. For example
$ efix -i pbm -o fax file.pbm > file.g3
For multiple page faxes use the -n option to specify a naming scheme for the
output files. For example
$ efix file1.pbm file2.pbm file3.pbm -n "out%02d.g3"
will produce pages out01.g3, out02.g3 and out03.g3, respectively.
The -n option string is a printf type pattern.
Often your document will be in pdf format, and efix does not work with pdf.
So prior to feeding your document through efix you will need to convert it to
tiff or pbm format. I'll work with pbm.
Use the command pdftoppm
$ pdftoppm -mono doc.pdf > doc.pbm
The -mono option will ensure monochromatic output, which will make your
document compatible with all fax machines. It works by applying a dithering
filter to the input.
However, sometimes the dithering may cause the document to be unreadable.
In such a case, I suggest first thresholding the document using ImageMagick or
gimp.
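As a sketch of thresholding from the command line with ImageMagick (the density
and threshold values are ones you will likely need to tune per document):
$ convert -density 200 doc.pdf -threshold 80% doc.pbm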
Note efix may fuss about certain things. For example you may receive
a message: "Error: PBM width must be multiple of 8"
In this case you will want to crop the image to a width that is a multiple of 8.
First, determine the dimensions of the image. Assuming you already converted
it to pbm format (without altering its width), invoke the command
$ file doc.pbm
You'll get something like
doc.pbm: Netpbm image data, size = 1275 x 1651, rawbits, bitmap
Now, 1275 is not divisible by 8, but 1272 is. So re-invoke pdftoppm with the
-W option, specifying the crop width
$ pdftoppm -mono -W 1272 doc.pdf > doc.pbm
See man page for more
$ man efix
------
| efax |
------
efax is a command line utility for sending and receiving fax documents.
It is an alternative to sendfax.
For a non-root user to use efax add the user to the group "dialout" or "uucp",
whichever is correct for your system. If you don't know which it is, you can
determine it by listing your modem device node as it appears in /dev:
$ ls -l /dev/ttyACM0 # Substitute for your modem device
crw-rw----. 1 root dialout 166, 0 Jul 7 17:05 /dev/ttyACM0
As apparent from the output, it's "dialout".
Sample usage:
$ efax -d /dev/ttyACM0 -l "+1 215 999 9999" -t "+1,800,9999999" form.g3
* The -d option specifies the modem device to use (default is /dev/modem).
* The -l option sets the local identification string, which should be the local
telephone number in international format.
* The -t option is the telephone number to dial. You may use commas to pause
between segments within the telephone number.
The file(s) to be sent should follow the number.
The fax file should be of an acceptable format, in particular one handled by
the efix utility.
See "FAX FILE FORMATS" in efax man page, as well as efix man page.
----------
| efax-gtk |
----------
efax-gtk is a graphical frontend to efax.
When running the GTK version on a non-gnome window manager add
"export NO_AT_BRIDGE=1" to the user's profile.
This takes care of some GTK issue. Otherwise you get an error -
"Couldn't register with accessibility bus: ..."
Note, you can use the utility cu to communicate with and test a modem type
device with AT type commands.
Note, the Fedora repository doesn't include efax-gtk.
I installed the rpmsphere repository so I can install from it efax-gtk.
However, Fedora's efax didn't come with -u option (utf8 enabled for efax-gtk
frontend).
So I copied the efax-0.9a from an archlinux installation to replace the
efax program in the Fedora installation.
-------------
| Voice Modem |
-------------
We'll assume here the modem device is accessed through /dev/ttyACM0.
To determine if your modem supports the voice feature open up a terminal with
minicom. (Be sure to set up the modem port correctly, e.g /dev/ttyACM0.)
In minicom, type (without $, that's just the prompt):
$ AT+FCLASS=?
* To put the modem in voice mode type:
$ AT+FCLASS=8
(=0, =1, =2 puts it in data and fax classes 1/2 mode respectively)
* To dial a number in voice mode, type:
$ ATD<number>;
The semicolon forces voice mode.
Different voice modems support different audio codecs
To enquire which codecs are supported, type
$ AT+VSM=?
To set a specific codec, type
$ AT+VSM=codec_name (as given by previous command)
If using a pcm 8 bit codec, you can convert the sound from the source
format (say .wav) to the specific codec using SoX
$ sox testing.wav -t raw -r 8000 -b 8 testing.raw
To verify it plays correctly:
$ play -t raw -r 8000 -b 8 -e signed-integer testing.raw
Vgetty is an extension to Mgetty-Sendfax allowing one to use a voice
capable modem to work as an answering machine.
Install it if necessary. In Fedora
$ sudo dnf install mgetty-voice
Check out the man page to learn more about it
$ man vgetty
--------------------
| Trendnet fax/modem |
--------------------
Trendnet manufactures USB fax modems.
When configuring minicom or mgetty-sendfax or efax to use your modem, you'll
need to know the device name of the modem (e.g. ttyACM0).
A fairly easy way to get that info and more is to plug in the modem and type
$ dmesg | tail -n 10
You should get output that resembles (time stamps deleted):
usb 2-1.5: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 2-1.5: Product: USB Modem
usb 2-1.5: Manufacturer: Conexant
usb 2-1.5: SerialNumber: 24680246
cdc_acm 2-1.5:1.0: ttyACM0: USB ACM device
usbcore: registered new interface driver cdc_acm
cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
On the fifth line appears the device node name ttyACM0 created for this
device.
I saw a post in this page where someone was using a Trendnet modem that wasn't
being recognized.
He gives advice similar to the above:
Issue the command
$ modprobe usbserial vendor=0x0572 product=0x1329
Note, vendor and product codes are specific for the modem discussed in the post.
$ dmesg | tail -n 100
Look for USB Modem device node name.
-----------------
| About Fax Class |
-----------------
Faxes are classified by class. Currently there are Class 1, Class 1.0, Class 2
and Class 2.0. For a short description about each refer to here.
The following is an excerpt from this webpage.
--- Start of Excerpt
2) You can use "HyperTerminal" or any similar terminal program to determine
whether your modem is Class2 or Class2.0.
Type:
AT+FCLASS=?
and if you receive something like:
0, 1, 2 or 2.0
as a response, then your modem can be configured as Fax Class1 or Fax Class2.0
modem. But if you receive something like:
0,1
then this is the Fax Class1 modem.
3) In short, when Class 2/2.0 command set is used, most of the fax session is
controlled by a fax modem built-in firmware, and only fax session setup by PC
software, e.g. AFax Server.
When Class 1 command set is selected, all of the work is pushed to PC software.
On multitasking systems, such as NT and Unix, and with multiple fax modems
attached, using Class 2/2.0 command set is recommended.
Trade off using Class 2/2.0 over Class 1:
* - less control over fax session if not handled well by modem
* - initial connection
* - not all modems support Class 2 or Class 2.0
* + less CPU ...
--- End of Excerpt
--------------
| Serial Ports |
--------------
Serial ports used to be the primary method by which communications with
peripheral devices took place. Today, the much faster USB port has almost
completely replaced the more clumsy and slower serial port as a means of
communicating with peripheral devices. However, USB modems (as well as modems
that plug into the computer's expansion slots) that communicate over the
telephone wire still need to communicate with the CPU using the same I/O
address and interrupt request line (IRQ) scheme that was used then, and must
therefore emulate this communication scheme.
Originally, a motherboard had the capacity to accommodate four serial ports. A
motherboard customarily came with two physical serial ports. The BIOS could
accommodate an additional two serial devices connected to ISA or PCI expansion
slots.
In Windows the four ports are designated, COM1, COM2, COM3 and COM4. In Unix
they are designated by their device node names in the /dev directory:
ttyS0, ttyS1, ttyS2 and ttyS3.
In the days of old a PCI modem had a jumper used to set the COM port to which
the modem should connect. Today this would be done in software.
Each port has an associated I/O address and IRQ:
* COM1: 0x3F8, IRQ4
* COM2: 0x2F8, IRQ3
* COM3: 0x3E8, IRQ4
* COM4: 0x2E8, IRQ3
Note, COM1 and COM3 share IRQ4, and COM2 and COM4 share IRQ3
Modern systems are capable of accommodating many more I/O addresses and
interrupts. In my current installation there are 32 ttyS* device nodes in /dev.
For more about serial ports read chapter four of the Serial Howto.
Also refer to this Wikipedia article.
To set the properties of a serial port use the setserial command.
$ setserial device commands
For example, to set COM2 to use IRQ 4, issue:
$ setserial /dev/ttyS1 irq "4"
Other commands are listed in the setserial manpages or by issuing the command
setserial without arguments.
If a serial device is configured incorrectly at startup (e.g. wrong IRQ) then
if you are using SysV style scripts, you can include the setserial command in
the file /etc/rc.d/rc.serial, to modify the IRQ being used. If this file does
not exist then it should be created.
Note, this file is called from the file /etc/rc.d/rc.sysinit.
A sample file:
#!/bin/sh
/bin/setserial /dev/ttyS2 irq "3"
You can also use udev to correctly configure your serial device.
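As a sketch (the device name and IRQ here are placeholders), a rule such as the
following in /etc/udev/rules.d/99-serial.rules would apply the setting whenever
the port appears:
ACTION=="add", KERNEL=="ttyS2", RUN+="/bin/setserial /dev/ttyS2 irq 3"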
File Recovery
********************************************************************************
* - File Recovery -
********************************************************************************
This section discusses methods for file recovery from a damaged file system,
or files that were mistakenly deleted.
Option 1 - Using debugfs
* Unmount file system from which you want to recover files.
* Then open the file system with debugfs
$ debugfs -w /dev/sd* # substitute correct partition
This will launch the debugfs shell.
* At debugfs prompt type
lsdel
(this should list nodes that have been deleted)
* Can use dd to recover file.
For the rest of the procedure refer to the
Linux ext3 ext4 deleted files recovery HOWTO
* Can also use dump filename (also in debugfs shell)
See this webpage for more.
Option 2 - Use 3rd party software
The testdisk package contains two programs.
* testdisk - Recovers lost partitions and makes non-booting disks bootable
again.
* photorec - Used to recover files.
Note, PhotoRec recovers files to the directory from which it is run.
Don't have the partition you want to recover from mounted when using it.
It is possible to select file types to recover within photorec's menus.
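As a sketch, photorec is invoked on the (unmounted) partition to scan, after
which its menus guide the recovery:
$ photorec /dev/sdb1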
Kernel
********************************************************************************
* - Kernel -
********************************************************************************
Some things to keep in mind about the Linux Kernel:
* The Linux kernel is stored in /boot.
Usually its name is something like vmlinuz, followed by a version number.
* Multiple linux kernels can be stored in /boot with corresponding initrd
or initramfs images.
For more about initrd refer to here.
* To boot a specific kernel, add a menu entry in grub.cfg.
See section Bootloader for more about Grub.
* The root file system partition that the kernel will mount is specified as one
of its arguments. For example
linux /vmlinuz-version root=LABEL=myrootpart ro ...
The above line in grub.cfg instructs the bootloader to launch the Linux kernel
contained in file vmlinuz-version, and tells it to look for a
partition whose label is myrootpart and mount it as the root partition.
* Make sure the root file system was originally set up with that kernel or
a kernel version that is not too distant; otherwise incompatibilities will
arise and the system will not boot properly, or will panic.
* Virtualization software builds its own kernel modules, meaning it requires
kernel headers (and a matching build toolchain) for the running kernel version.
If the installed kernel headers do not match the running kernel, the
virtualization setup software will detect the mismatch and refuse to build
the module.
The relevant directory for the kernel headers is /usr/src/kernels/kernelversion
In Fedora (and rpm based systems) you can download the kernel headers using
dnf (or yum):
$ dnf install kernel-headers kernel-devel
This will install the latest version. Make sure you are running the latest
kernel, or at least install it. If you want to continue running the old kernel
then download an rpm package and install manually.
$ rpm -i kernel-headers-version.rpm kernel-devel-version.rpm
Note, it may be possible the repository contains older versions of the
kernel headers.
If the versions of the kernel headers and running kernel match, then the
virtualization software should compile the modules without issue.
* The linux kernel takes arguments.
For instance in grub.conf you may have a line such as:
linuxefi /vmlinuz root=UUID=aee15933-bea5-1e48-a3ed-e317b1848132 ro rhgb quiet LANG=en_US.UTF-8
Following vmlinuz (the kernel file) are the options which are passed to the kernel.
Some of the common arguments are
* root=
Tells the kernel the partition that contains the / directory
e.g. root=LABEL=myrootpart
e.g. root=UUID=...
* ro
Mount (initially) read only
* quiet
Do not display all the boot messages. Otherwise the same messages that are
recorded in dmesg are displayed as the system boots.
* LANG=
Set language setting
Some other arguments
* nomodeset
Use simple graphics mode when booting. Otherwise, the kernel tries to
figure out which driver to use to take advantage of the graphics chipset's
capabilities.
Note, during (non-quiet) boot a message of the sort
"fb: switching to radeondrmfb from simple" may appear.
This is where the kernel is switching from a generic graphics driver to the
more specific driver for the computer's graphics chipset.
Use nomodeset to diagnose problems caused by a lack of a working driver
for your graphics card.
* 3
For a system using systemd, if "3" is placed as the final argument, then
the system will not automatically boot into graphical mode.
UEFI
********************************************************************************
- UEFI -
********************************************************************************
When a computer is first turned on, the processor begins to fetch instructions
from a non-volatile memory chip.
In older IBM and IBM compatible PCs this memory chip is known as the BIOS.
This chip sits on the motherboard and contains the firmware (embedded software)
necessary to initialize the processor and memory with what is needed to get an
operating system up and running.
---------------
| BIOS vs. UEFI |
---------------
The BIOS firmware includes basic drivers for interfacing with peripherals (e.g.
keyboard), storage devices (e.g. IDE, SATA or USB), and a graphics card
(typically in VGA mode). It also contains a textual or graphical interface that
allows a user to configure and tweak various aspects of the system and the
booting process.
As computers became more advanced, some of the limitations of BIOS style
firmware were foreseen, and a new firmware specification, UEFI (Unified Extensible
Firmware Interface), was developed by an industrial consortium to handle the
booting needs of the next generation of computers.
One of the key differences between UEFI and BIOS firmware is in flexibility.
Systems based on BIOS firmware are traditionally thought of as rigid, whereas
UEFI systems are very flexible. For example, BIOS firmware is file system
unaware, and always looks for the bootloader (boot code) in the first sector
(512 bytes) of the first disk in its list of boot priorities. This 512 byte
sector is usually referred to as the MBR (Master Boot Record). Although boot
priorities can be modified, the 512 byte limitation is hard coded into the
BIOS.
This limitation may have been sufficient for single boot OSs of old, but a
modern PC may require dual-boot or multi-boot configurations and more advanced
boot features (such as memory tests, recovery, etc.).
To overcome this limitation the larger bootloader was split up into sections,
the first of which resided on the first sector of the disk, and basically
pointed to where the next section resides. Usually (by convention) some space
on the disk was left empty between the MBR and the first partition. As such
the second stage of the bootloader could now be placed there. For more
advanced bootloaders such as GRUB even this wasn't sufficient, and a third stage
was required. It resided in the same file system as one of the installed OSs.
In contrast, UEFI specifies that UEFI compliant firmware will be aware of at
least the FAT file system, although UEFI firmware may be aware of other types of
file systems.
For a colorful and comprehensive guide to UEFI see this webpage.
Also see
* Archlinux Wiki page on UEFI.
* here.
* here.
* section below on Disks and partitioning.
------------
| efibootmgr |
------------
This utility provides a way of creating and/or reconfiguring UEFI boot entries.
The utility provides a consistent interface between the user and the hardware's
firmware. This is particularly useful, as there is no uniform and consistent
implementation of UEFI across hardware.
To use this utility, the system must have been launched using UEFI.
To list all EFI boot entries, invoke the command without arguments
$ efibootmgr
If the system was not launched using UEFI, then the message "EFI variables are
not supported on this system" will be given.
Add the -v option to obtain more verbose output.
To create an EFI boot entry use to -c option. For example:
$ efibootmgr -c -d /dev/sda -p 2 -L "MyOS" -l '/EFI/Grub/grubx64.efi'
The -d option is for device (in this example it's the first hard drive)
The -L option specifies the boot entry label (i.e. the way it is presented in
the boot menu.)
The -l option specifies the EFI executable to launch.
Note, in the stored boot entry the executable path appears with backslashes
(DOS nomenclature, since the EFI partition is FAT); efibootmgr accepts the path
written with forward slashes, as in the example above.
Note, within the OS the EFI file system should normally be mounted on /boot/efi.
The installer usually takes care of that.
To modify a boot entry
$ efibootmgr -b entry# ...
For example to change the label of boot entry 0002 to "LinuxOS"
$ efibootmgr -b 2 -L "LinuxOS"
To delete boot entry 0002
$ efibootmgr -b 2 -B
The -B option is for delete.
See here for a good description of this utility along with examples.
Bootloader
********************************************************************************
* - Bootloader -
********************************************************************************
A computer's BIOS or UEFI booting mechanism doesn't load the operating system
directly. Rather it loads a bootloader into memory and executes the
bootloader code. The bootloader in turn knows how to launch the OS.
The bootloader can be a simple program which can launch only one type of OS
(typically the case with Windows bootloaders).
More sophisticated bootloaders such as LILO and the more advanced GRUB, can
launch different types of OS's (Linux, Windows, FreeBSD, etc.). This section
focuses on GRUB which is the bootloader of choice amongst Unix-like and Linux
OSs.
------
| GRUB |
------
Grub is the GNU bootloader utility (LILO was the original bootloader used
in Linux Distributions). The original Grub is now named Grub Legacy, and
has been superseded by Grub 2 (version 1.9 and up), although the original
Grub can still be found in some systems.
The Master Boot Record (MBR) is a 512 byte sector that contains the first machine
instructions executed by the computer after the BIOS completes executing.
It also contains the partition table for a disk utilizing the BIOS type
partition scheme (refer to section on disks and partitioning for additional
background on the MBR partition table).
Grub works by installing the initial boot program in this sector.
The program placed in this sector is called stage 1, and being limited to only
512 bytes less the 64 reserved for the BIOS partition table, it is basically
responsible for identifying where on the disk the successive stage (stage 1.5
or stage 2) programs reside, and launching the next stage. For further details
on the mechanism of the different stages refer to this Wikipedia article.
The idea behind Grub is to provide a powerful user shell interface to execute
various tasks associated with the boot process. It is file system aware, and
can therefore launch a kernel and initram image directly from the file system
(rather than keep a record of sectors where they reside, as more simple
bootloaders do). Furthermore, being file system aware allows much of Grub's
drivers and code to be conveniently placed in the /boot directory.
In case the MBR ever gets corrupted and you need to start from scratch
then it can be erased as follows:
$ dd if=/dev/zero of=targetdevice bs=512 count=1
The arguments provided are:
* /dev/zero
This is a Unix device which simply streams out zeros. It is good for nulling.
* targetdevice
This is the drive for which you wish to null its MBR (e.g. /dev/sdf).
* bs=512
Block size is 512 bytes, meaning dd writes 512 bytes at a time.
* count=1
This means write only one set of 512 bytes.
This argument is crucial, otherwise dd will continuously write onto your disk
until it reaches the end, thus wiping out the entire disk!
Warning! dd must be used with utmost caution. A mistake in one of
the arguments can quietly and irreversibly wipe out portions of the disk you
did not intend to, or the disk in its entirety.
Grub can be invoked from an already running Linux system.
-------------------------
| Grub 0.97 (Grub Legacy) |
-------------------------
Here we assume that (hd0) refers to device /dev/sda.
Note, this is not always the case, so check first!
To set up (install) grub in the MBR of device /dev/sda with a boot directory
in /dev/sda1, issue the following commands within the grub shell.
$ root (hd0,0) # location of boot images
$ setup (hd0) # hd0 is the destination of the installation of stage 1
To set up grub in the boot sector of /dev/sda1 with a boot directory in
/dev/sda1:
$ root (hd0,0)
$ setup (hd0,0)
To set up grub in the boot sector of /dev/sda1 with a boot directory in
/dev/sda2:
$ root (hd0,0)
$ setup (hd0,1)
Note, in the latter two cases Grub will not be automatically booted, since
stage 1 is not in the MBR. This Grub installation, though, may still be chain
loaded from a different boot disk. In any case, the latter two examples were
more for illustration purposes than practicality.
--------------------------------
| Grub 2 - with traditional BIOS |
--------------------------------
Refer to the previous subsection for an introduction to MBR, and about the
Grub booting mechanism.
The configuration file for grub2 is called grub.cfg.
For a traditional BIOS setup it resides in /boot/grub2/grub.cfg.
(For a UEFI disk see relevant section below.)
You can edit this file to add boot entries.
Some important grub2 utilities:
* grub2-install
Copies GRUB images into /boot/grub2, and uses grub-setup to install grub
into the boot sector
Note, it might be necessary to supply your own grub.cfg file in the
/boot/grub2 directory.
* grub2-setup
Use this utility to set up a Grub configuration given that grub2 files are
already installed. For example, from within a Unix shell, issue the command
$ /sbin/grub2-setup -d /boot/grub --root-device="(hd0,msdos1)" /dev/sda
* The argument --root-device="(hd0,msdos1)" tells grub-setup where the
/boot partition is. In the case of the example it's telling it to look
at the first disk (hd0), and the first MBR style partition (msdos1) on
that disk. For Linux this would usually correspond to /dev/sda1 (although
not necessarily.)
* The argument /dev/sda tells grub2-setup to place stage 1 code in the MBR
of disk /dev/sda.
Example
-------
The following is an example where you have installed a system onto a disk, but
without a bootloader.
If the /boot partition of the target disk doesn't have grub yet installed, then
mount the boot partition and run grub2-install to install grub2
$ mount /dev/target_boot_device /mnt (e.g. /dev/sdb1)
$ grub2-install --boot-directory=/mnt /dev/target_mbr_device (e.g. /dev/sdb)
Use grub2-setup to setup a grub configuration given that grub2 files have
already been installed (i.e. /boot/grub directory exists and has relevant
files but nothing in the MBR).
$ grub2-setup -d /boot/grub -r "(hd1,msdos1)" /dev/sdb
If you have two usb disks and their boot order is unpredictable,
you can install grub on both disks and have both grubs reference the
same boot directory.
$ grub-install --boot-directory bootdir /dev/firstdevice
$ grub-install --boot-directory bootdir /dev/seconddevice
Most likely grub is already installed on the first device, so only
the second line is necessary. "bootdir" is just where your grub files
are installed. Mount partition containing bootdir if necessary.
For example
$ grub-install --boot-directory /boot /dev/sdb
will install stage 1 in the MBR (first sector) of /dev/sdb, and further code in
the sectors that follow.
It will look for grub files in the /boot directory of the system on which
grub-install was run (e.g. /dev/sda1, the boot partition of /dev/sda).
Troubleshooting
---------------
If a disk is not booting into Grub and you suspect Grub became corrupted,
start the computer with a rescue disk or live image.
First mount the target disk's boot directory onto /mnt
$ mount /dev/sda1 /mnt # Assuming /dev/sda1 is where your boot partition is
If /dev/sda is the disk on whose MBR you wish to place stage 1, then install
grub2 as follows
$ grub2-install --boot-directory /mnt /dev/sda #
Warning! You must replace /dev/sda and /dev/sda1 with that which is
applicable to your own system. By no means use the commands as they appear
here blindly, as that can corrupt your installation further. If you don't feel
you have a good grasp of using this command, then ask someone with experience
to fix it for you.
--------------------------
| GRUB2 with UEFI systems |
--------------------------
For background on UEFI see the Archlinux Wiki page on UEFI.
For a more colorful and comprehensive guide to UEFI see this webpage.
Also see here.
Also see section below on Disks and partitioning.
Some Useful tools for manipulating EFI systems:
* For listing and manipulating EFI variables, use efivar
* To manipulate UEFI firmware boot manager settings, use efibootmgr
* For manipulating UEFI secure boot platforms, use efitools
* UEFI shell
A UEFI shell is a shell for the firmware that allows launching EFI applications
amongst which are bootloaders.
It is useful for obtaining various information about the system and firmware,
running diskpart, loading UEFI drivers, file editing (txt and hex) and more.
Some x86_64 UEFI firmware comes with the option of launching such a shell.
The shell must be named shellx64.efi
Archlinux has such a shell available (in AUR): uefi-shell-git
Refer to the following document for a complete description of this shell:
UEFI Shell
* Procedure for installing Grub2 bootmanager in an EFI system
1. Mount efi system partition. For example
$ mount /dev/sda1 /efi
In the example /dev/sda1 is the EFI partition, and /efi is the desired mount
point.
2. Choose a bootloader identifier, say "GRUB".
Thus, a directory /efi/EFI/GRUB will be created to store the GRUB EFI binary.
3. Invoke the following commands:
$ mount /dev/efipartition /efi # make sure you created /efi directory.
Note, replace "efipartition" with the actual EFI partition name (e.g. sda1)
$ grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB
This will install the EFI binary "grubx64.efi" and place it in directory
"/efi/EFI/GRUB"
Its modules will be stored in /boot/grub/x86_64-efi/
grub-install will also attempt to create a corresponding entry for launching
GRUB in the firmware boot manager (it calls the aforementioned efibootmgr
program to do so).
4. Create grub.cfg
Use the command grub-mkconfig -o /boot/grub/grub.cfg to generate grub's
main configuration file.
If doing this from an archlinux install media make sure you arch-chroot.
The Archlinux menu entry will be automatically added to grub.cfg.
For booting other systems you will need to manually add entries into grub.cfg
to reference those systems.
5. Issue the command (as root)
$ sudo efibootmgr
This will display the current settings.
For more on the settings see end of man page.
$ man efibootmgr
If you have Windows installed (on your dual boot system), very likely
Windows will be launched right away without a boot menu showing up.
You can change this using efibootmgr, as shown in the sketch after this
procedure.
If the boot manager menu does not appear when booting, it could be that the
timeout parameter is set to zero. Try
$ efibootmgr -t 10
This will give you 10 seconds to decide which boot entry to select.
Note, if you get a blank screen, just wait the 10 seconds, and it will
start.
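The following is a minimal sketch of reordering the boot entries with
efibootmgr so that, say, GRUB is tried before Windows. The entry numbers
0003 and 0001 are examples; use the Boot#### numbers shown by efibootmgr
on your own system.
$ sudo efibootmgr                # note the Boot#### numbers of GRUB and Windows
$ sudo efibootmgr -o 0003,0001   # try entry 0003 (e.g. GRUB) first, then 0001
$ sudo efibootmgr -n 0001        # or boot entry 0001 on the next boot only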
-------------------------------------------------------
| Example - Minimal boot option with GRUB and Archlinux |
-------------------------------------------------------
The following configuration file is minimalist. It doesn't provide you with
a menu. It merely launches the specified Linux kernel and initram image.
In this example the partitions, their UUIDs and their labels are:
root = /dev/sda2 = afd0f78d-d7e4-470e-b44d-0cd9cb707217 = san64archroot
boot = /dev/sda3 = 196776d1-d11f-4b9e-b8fc-24b1533a403c = san64archboot
# Beginning of grub.cfg file
set root='hd0,gpt2'
insmod gzio
insmod part_gpt
insmod ext2
set root='hd0,gpt3' # /dev/sda3 boot part
linux /vmlinuz-linux root=LABEL=san64archroot rw quiet
initrd /initramfs-linux.img
# End of grub.cfg
When you boot your computer you will be put into a Grub shell.
To launch the system, issue the following command at the grub prompt
$ source /grub.cfg
-------------------------------------------
| Example - Setting up the /boot directory |
-------------------------------------------
The following commands are part of the procedure for configuring your boot
partition for an Archlinux installation.
$ mount /dev/sda2 /mnt # /dev/sda2 corresponds to the root partition
$ arch-chroot /mnt # Change system root to /mnt
$ mkinitcpio -p linux # Make an initram image
$ mount /dev/sda3 /boot # /dev/sda3 corresponds to the boot partition
# Let Grub generate a configuration file based on system configuration
$ grub2-mkconfig -o /boot/grub/grub.cfg
Note, substitute for sda2 and sda3 that which is relevant to your setup.
Network Boot
********************************************************************************
* - Network Boot -
********************************************************************************
Network booting is accomplished using the Preboot eXecution Environment (PXE).
For more info on PXE refer to this WikiPedia article.
For an open source implementation of PXE see ipxe.org.
To get an idea of how to use PXE, invoke the iPXE command and issue various
commands within the interactive shell to set up the network, followed by
obtaining the desired kernel (and initrd) images and launching the kernel.
The website ipxe.org gives an example of launching a demo live Linux system
from the network.
After setting up and examining the network connection the following command
is issued within the iPXE shell:
$ chain http://boot.ipxe.org/demo/boot.php
This launches the short script from the internet.
The contents of the script is:
#!ipxe
kernel vmlinuz-3.16.0-rc4 bootfile=http://boot.ipxe.org/demo/boot.php fastboot initrd=initrd.img
initrd initrd.img
boot
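For context, a minimal sketch of the interactive iPXE session leading up to
the chain command might look as follows (dhcp, show and chain are standard
iPXE shell commands; net0 is iPXE's name for the first network interface):
iPXE> dhcp                                       # bring up the network via DHCP
iPXE> show net0/ip                               # verify the address that was assigned
iPXE> chain http://boot.ipxe.org/demo/boot.php   # fetch and run the demo boot script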
Practically speaking, a Linux distribution that supports a network install
will have a current install script available on the web.
iPXE can be launched from grub or some other bootloader through chain loading.
iPXE can also be installed on a bootable CD or USBstick and launched from
there.
Boot Process
********************************************************************************
* - Boot Process -
********************************************************************************
After the bootloader loads the kernel and initrd image, the system goes
through various steps to get the system to the point where a user can login
and use the system. Amongst them is mounting file systems, loading drivers,
and starting services. The system also logs the boot process.
It is possible to configure Linux to display the steps taking place during
boot (with some installations this may be the default).
---------------
| Boot progress |
---------------
plymouth is a graphical boot system and logger.
With modern computer graphics cards, the kernel is able to provide a
graphical boot screen, of which plymouth takes advantage.
With older cards plymouth reverts to a simple progress bar.
Plymouth can also be configured to provide detailed boot output.
Pressing ESC during boot toggles between graphical boot and the detailed
output mode.
To configure boot screen preferences, issue the command
$ plymouth-set-default-theme [options]
See man page for plymouth for more detail.
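For instance, to list the installed themes and switch to one of them, something
like the following sketch may be used (the theme name "spinner" is only an
example; pick one from the list). The -R option also rebuilds the initrd so the
change takes effect on the next boot.
$ plymouth-set-default-theme --list           # show the installed themes
$ sudo plymouth-set-default-theme -R spinner  # set the theme and rebuild the initrd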
Note, for graphical boot to take place, either "splash" or "rhgb" must be
provided as a command line option to the kernel. This is something that's
configured in grub.conf.
------------------
| Maintenance Mode |
------------------
When Linux encounters a problem booting it allows the administrator to log
in as root. This allows the administrator to fix the problem and reboot the
computer. Since the root file system is initially mounted in read-only mode
it will be necessary to remount it in read-write mode:
$ mount -o remount,rw /
Once the root file system is remounted in read-write mode the problem may
be fixed.
Often the failure to boot is the result of a faulty entry in /etc/fstab,
in which case the entry may be corrected (or deleted if the entry is not
desired). Typing exit, continues the boot process.
Sometimes the fault may be with a damaged file system. This can be fixed
with the fsck command. Refer to this section and scroll down to subsection
titled "Repairing a damaged file system" for more.
Disks and Partitioning
********************************************************************************
* - Disks and Partitioning -
********************************************************************************
A physical disk may be divided into a few logical disks or areas known as
partitions.
The information on partition boundaries usually resides somewhere in the
beginning of the disk, in what's called a partition table.
There are two main partitioning schemes:
* Master Boot Record (MBR)
This is the classic partitioning scheme, and has been around since 1983.
It started with the IBM PC and is still in use today.
This method of partitioning is limited to a disk of size 2TiB.
It allows for four main partitions.
If more partitions are required, the fourth partition can be made into an
extended partition, which is simply a partition that acts as a
container for additional, so-called logical, partitions.
For more on MBR refer to this Wikipedia article.
* GUID Partition Table (GPT)
GPT is part of the UEFI standard.
It was developed by Intel in the 1990s, and is intended to replace MBR.
It removes a lot of the constraints and limitations of the MBR partitioning
scheme. For instance it can work with a disk that has up to 2^64 sectors
(a sector is typically 512 bytes). The GPT partition table is far more
flexible than MBR's partition table, and allows for essentially unlimited
partitions.
The first usable sector on a 512 byte sector disk is sector 34.
For more refer to this Wikipedia article.
For a comparison of MBR and GPT see here, and here.
--------------
| Partitioning |
--------------
A number of tools are available to partition a storage device.
The following are terminal based utilities that can be used to display,
create and manipulate partitions:
* fdisk - An interactive terminal based partitioning tool.
* gdisk - Same as fdisk, except intended for GPT partitioning
* sfdisk - Partition table manipulator for Linux
* cfdisk - Curses based disk partition table manipulator for Linux
* parted - A powerful partition table manipulator with command line
or interactive modes.
* mpartition - Tool to partition an MSDOS hard disk
Graphical tools and frontends are available as well.
When installing Linux you will be asked whether you would like
* the installer to automatically partition and format your system, or
* customize your partitioning.
Under normal circumstances it's fine to have the installer decide how to
partition your disk. However, if you have special requirements, such as
you wish to install a few OSs on your disk, or you wish to create a separate
partition for certain mount points such as /home, /var or /var/spool, then you
should opt for customized partitioning.
Note, in general, if you have existing partitions that you wish to preserve
on your disk when installing a new OS, then you can't assume the installer
will preserve them. Windows installers are likely to erase the disk before
installing (except for other Windows installations on the disk).
Linux installers usually respect existing partitions and other OSs present on
the disk. If you have another Linux or Windows installation on your disk, then
most installers will create a dual boot configuration. Such an installer will
not erase an existing partition unless explicitly told to do so.
However, its not a good idea to rely on this. Make sure to backup your data
before proceeding to install your Linux on a disk with existing partitions.
For a Linux installation you will need the following partitions:
* boot partition - Contains all the boot stuff, such as Linux kernel,
initram image and bootloader stuff.
* root partition (/)
* home partition (/home) - Contains user directories. Although it is not
strictly necessary to have /home sit on a separate partition, it can be
useful when replacing the system, and you want your home directories to
remain intact.
Optionally have a
* swap partition -- Although not mandatory, it is highly recommended to
create a swap partition.
The rule of thumb used to be to allot twice as much storage space to the swap
partition as there is RAM. Nowadays, many computers come with a lot of RAM,
so I am not sure this rule of thumb is still relevant.
Other partitions that may be useful but not necessary, are:
* usr partition (/usr)
* var partition (/var)
* opt partition (/opt)
Without separate partitions being designated, these directories will occupy
space on the root partition.
If setting up a root partition that is tight on space, it is advisable to
create separate /var and /usr partitions and perhaps a /tmp partition.
The reason for this is that these directories tend to grow over time, and can
thus fill up the root partition. A root partition that fills up completely is a problem.
The boot partition should be approximately 256 to 512 MiB.
The boot partition will contain the Linux kernel and initram image as well as
bootloader related files. Although normally the contents of the boot
partition doesn't occupy as much space as suggested for the partition size,
having a larger partition leaves open the possibility for additional Linux
kernels and initram images for multi boot configurations.
With a GPT formatted drive, up to two additional partitions will be necessary:
* BIOS boot partition -- Should be approximately 1MiB.
This small partition is where GRUB embeds its core image when booting a GPT
disk on a traditional BIOS system.
When using gdisk or some other GPT partitioning software select partition
type EF02.
(Do not confuse this with the protective MBR: GPT unaware OSs and software
would otherwise see a GPT disk as being empty, and as such may decide to
format it. The protective MBR is a BIOS style partition entry, created
automatically by GPT partitioning tools, which signals to such software that
the disk is in use and should not be erased.)
* UEFI partition (EFI system partition) - Should be approximately 1GiB.
This is where UEFI files and drivers are stored.
UEFI is a new booting standard that is replacing the old BIOS boot.
See "GRUB2 with UEFI systems" subsection for more about it.
When using gdisk or some other GPT partitioning software select partition
type EF00.
If a new partition has been created or modified and you wish the kernel to be made
aware of the changes use partprobe, or partx.
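To tie the above together, the following is a rough sketch of laying out a GPT
disk with parted, creating a BIOS boot partition, an EFI system partition and a
root partition (substitute your own device for /dev/sdX and adjust the sizes to
taste; file systems still need to be created afterwards with mkfs):
$ parted --script /dev/sdX mklabel gpt
$ parted --script /dev/sdX mkpart primary 1MiB 2MiB         # BIOS boot partition
$ parted --script /dev/sdX set 1 bios_grub on               # same role as gdisk type EF02
$ parted --script /dev/sdX mkpart primary fat32 2MiB 1GiB   # EFI system partition
$ parted --script /dev/sdX set 2 esp on                     # same role as gdisk type EF00
$ parted --script /dev/sdX mkpart primary ext4 1GiB 100%    # root partition
$ partprobe /dev/sdX                                        # have the kernel reread the table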
----------------------
| LVM - logical volume |
----------------------
One can create virtual partitions within a physical storage device
partition. This is useful if only one partition is available and one wants a
separate / /boot /var and swap partitions on that one physical partition.
LVM related Terminology:
Volume Group (VG) - an abstract entity consisting of one or more disks
Physical Volume (PV) - a reference to a physical disk
Logical Volume (LV) - equivalent to old style partitions
Physical Extent (PE)
Logical Extent (LE)
Snapshot
LVM related Commands:
lvm - logical volume manager
vgscan - scan for all volume groups
vgdisplay - display volume group information
pvdisplay - display physical volume information
pvresize - resize a physical volume (partition)
lvscan - scan for all logical volumes
lvextend - resize (upward) a volume group
lvreduce - resize (downward) a volume group (may destroy data - resize filesystem first)
vgchange - to bring a volume group online
pvmove - To move an online logical volume between PVs on the same Volume Group
To mount a logical volume:
$ mount /dev/volumegroupname/logicalvolumename mountpoint
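To tie the above terminology together, the following is a brief sketch of
carving a single physical partition into several logical volumes (the device
name /dev/sda2, the volume group name vg0 and the sizes are all examples):
$ pvcreate /dev/sda2            # mark the partition as an LVM physical volume
$ vgcreate vg0 /dev/sda2        # create a volume group named vg0 containing it
$ lvcreate -L 30G -n root vg0   # a logical volume for /
$ lvcreate -L 10G -n var vg0    # a logical volume for /var
$ lvcreate -L 4G -n swap vg0    # a logical volume for swap
$ mkfs.ext4 /dev/vg0/root       # create file systems on the new volumes
$ mkfs.ext4 /dev/vg0/var
$ mkswap /dev/vg0/swap
$ mount /dev/vg0/root /mnt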
-----
| DOS |
-----
If installing DOS on the same harddrive, it will be necessary to first create
the DOS partition using the DOS partitioning program "fdisk" (not the Linux
fdisk). If Linux partitions already exist on the harddrive it may be necessary
to first remove them using cfdisk while in Linux, and then rebooting
with a DOS startup disk, and creating the DOS partition (since the DOS
fdisk does not know how to delete the Linux partitions).
------
| Misc |
------
Disk signature - each partition of a disk receives a signature.
Be careful when cloning partitions, as the signatures are duplicated,
something that will cause a *disk collision*.
Use the wipefs utility to inspect or erase a disk's signatures, as sketched
below.
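A brief sketch of wipefs usage (substitute your own device; note the second
command is destructive):
$ wipefs /dev/sdb1          # list the signatures found on the device
$ wipefs --all /dev/sdb1    # erase all of the signatures on the device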
Disk Encryption
********************************************************************************
* - Disk Encryption -
********************************************************************************
Disk encryption in Linux is on a per partition level.
Partition can be physical or logical.
The cryptsetup command is the main command used in dealing with encrypted
partitions.
(See man page for more details on this command)
--------
| Basics |
--------
The following procedure is for a physical partition (although it can be adapted
to logical partitions).
Sources:
* Linux hard disk encryption with luks cryptsetup command HOWTO
* How to encrypt a single partition in linux/
* man page for cryptsetup command
* Identify which partition you would like to encrypt.
$ lsblk
sdc      8:32   0   1.0T  0 disk
└─sdc1   8:33   0   1.0T  0 part /home/storage
If target partition is mounted, then umount it
$ umount /dev/sdc1
* Format the partition - This sets up a luks container in the partition
$ cryptsetup luksFormat --type luks2 /dev/sdc1
You will be asked to provide a passphrase (do not forget the given passphrase)
(note: luks2 provides extended features. See man page for more)
* Open the partition using the passphrase
$ cryptsetup luksOpen /dev/sdc1 mybackup
"mybackup is the name to which the device will be mapped in device mapper.
Substitute a name that is descriptive of the purpose of your storage.
The mapped device will show up as:
/dev/mapper/mybackup
* Zero out the data on the partition (optional)
$ dd if=/dev/zero of=/dev/mapper/mybackup bs=1M status=progress
Doing this will enhance the security of your device.
By zeroing out the data in the unencrypted domain, you are causing the
partition to appear to have random data in the encrypted domain.
Therefore, it will be more difficult to use known patterns to
crack the encryption key.
* Create a file system on the encrypted partition
$ mkfs.ext4 /dev/mapper/mybackup
(Add -c option to check for bad sectors; add a second -c option to employ
a slower read/write check. Can also use fsck.ext4 -c /dev/mapper/mybackup
to check afterwards with the volume unmounted.)
Note, the gnome disks graphical utility can be used to format /dev/sdc1 as
desired. It is a good idea to specify "erase" of data when creating a new
partition. This is equivalent to zeroing out the data mentioned above. This
may take time, but will render the encryption more effective.
* To change a passphrase
$ cryptsetup luksChangeKey /dev/sdc1 # substitute your own device
You will be prompted for the passphrase to change.
You will then be prompted for the new passphrase to replace the old one.
* To manually mount the partition with the newly created file system.
The first step is to create a /dev/mapper entry (this was already done up
above, so no need to do this again for this example.)
$ cryptsetup luksOpen /dev/mydev mybackup # substitute for mydev e.g. sdc1
The second step is to mount the /dev/mapper device onto a mount point of choice:
$ mount /dev/mapper/mybackup /mnt
In this example "/mnt" is the directory onto which the partition will be
mounted.
* To umount and secure data
As long as the partition is open, it is accessible to those with permissions
to the disk. If you are no longer using the disk and want to make the disk
inaccessible, do as follows:
$ umount /dev/mapper/mybackup
$ cryptsetup luksClose mybackup
The latter command removes the existing mapping "mybackup" and wipes the key
from kernel memory.
* Sample bash functions for mounting and unmounting encrypted volumes
# mounting mirrorbak
function mmirrorbak () {
if [ ! -e /dev/mapper/mirrorbak ]; then
sudo cryptsetup luksOpen /dev/sdc1 mirrorbak
echo "Opened mirrorbak partition"
fi
if mountpoint -q -- /home/mirrorbak; then
echo "/home/mirrorbak already mounted"
else
sudo mount /dev/mapper/mirrorbak /home/mirrorbak
echo "Mounting /home/mirrorbak"
fi
}
# unmounting mirrorbak
function umirrorbak () {
if mountpoint -q -- /home/mirrorbak; then
if sudo umount /home/mirrorbak; then
echo "Unmounted /home/mirrorbak"
if sudo cryptsetup luksClose mirrorbak; then
echo "Closed mirrorbak partition"
fi
fi
fi
}
The command mountpoint checks if a given directory is serving as a mount point
or not. Refer to the command's man page for more.
--------------------
| Automatic mounting |
--------------------
To setup the partition for automatic mounting follow the procedure outlined
below. The procedure is based on the web page Automount a luks encrypted
volume on system start.
First create a key for unlocking the volume.
$ mkdir /etc/luks-keys
Use 4KB of random data as your key
$ dd if=/dev/urandom of=/etc/luks-keys/secret_key bs=512 count=8
Note, this key needs to remain secret. It is therefore advisable to store it
on an encrypted partition (in the above example the key is stored in
/etc/luks-keys, which resides in the root partition. Therefore, the root
partition should be encrypted, otherwise someone with access to the root
partition would be able to access the key).
To add the key to those keys available for unlocking the volume use
$ cryptsetup -v luksAddKey /dev/sdb1 /etc/luks-keys/secret_key
To get information on your luks encrypted volume
$ cryptsetup luksDump /dev/sdc1 # substitute your own device for sdc1
grep "Key Slot" to see which of the eight key slots are enabled.
Key Slot 0: ENABLED
Key Slot 1: DISABLED
etc.
You can now unlock your luks encrypted volume using the newly created key
$ cryptsetup -v luksOpen /dev/sdc1 mybackup --key-file=/etc/luks-keys/secret_key
To automate mounting you first need to make an entry in /etc/crypttab (see
manual page for crypttab) corresponding to your encrypted volume.
The crypttab file specifies encrypted volumes that are to be mounted during
system boot.
In the example here this line would take the form:
mybackup UUID=f73... /etc/luks-keys/secret_key luks
- The first field is the device mapper name.
- The second field is the UUID of the volume (use cryptsetup luksDump /dev/sdc1 to
obtain it).
- The third field is the file containing the passphrase key.
- The fourth field is a list of options. In this case the encryption method.
Try verifying that the encrypted volume can be opened using the information
provided in crypttab.
$ cryptdisks_start mybackup
The next step is to specify in fstab how and where the encrypted volume should
be mounted.
In /etc/fstab enter a line:
/dev/mapper/mybackup /mount_point ext4 defaults 0 2
- The first field is the device name in its accessible form (i.e. nonencrypted)
- The second field is the mount point (e.g. /home/backup, /home/storage, etc.)
- The third field is the file system type
- The fourth field are options
- See fstab man page for meaning of fifth and sixth fields
You can test the working of the entry in fstab with
$ mount /mount_point
To test the whole thing, reboot the system.
----------------------------
| Automatic mounting at boot |
----------------------------
The procedure outlined here applies to the following scenario:
You added an encrypted volume sometime after having installed your OS.
Unlocking this volume requires the same password as unlocking your other
encrypted volumes that must be unlocked prior to booting (e.g. /root, /home).
You would like this added volume mounted along with the others.
The procedure is based on a forum thread.
* Obtain the device name. You may use gparted or lsblk
$ lsblk
Let's say it's /dev/sdb1
* Get the UUID for the device using cryptsetup
$ sudo cryptsetup luksUUID /dev/sdb1
e9e2d59a-bf23-47c6-bd59-424e951098d7
* Generate a UUID to be used as the encrypted partition name in the /dev/mapper
directory
$ uuidgen
befbfc25-3159-4b19-87c5-6d40fa60e2ad
Alternatively, come up with a human readable name for your encrypted
partition.
* Edit /etc/crypttab and add a new line:
luks-befbfc25-3159-4b19-87c5-6d40fa60e2ad UUID=e9e2d59a-bf23-47c6-bd59-424e951098d7 none
If you selected a human readable name for your encrypted partition
replace luks-befbfc25-3159-4b19-87c5-6d40fa60e2ad with that name
* Edit /etc/fstab, and add the line:
/dev/mapper/luks-befbfc25-3159-4b19-87c5-6d40fa60e2ad /mount-point ext4 defaults 1 2
Note, you may add other options besides defaults.
If your file system type differs from ext4 substitute accordingly.
Here too, if you selected a human readable name for your encrypted partition
replace luks-befbfc25-3159-4b19-87c5-6d40fa60e2ad with that name.
Laptop
********************************************************************************
* - Laptop -
********************************************************************************
(This section is work in progress)
Functionality wise laptops typically differ from desktop computers mainly in
* Power and battery management features
* Built-in wireless networking and bluetooth capability
The latter is discussed in the Network Guide.
Note, because laptop hardware is more compact, proper fan control might be more
critical for a laptop than for a desktop computer under similar loads.
See section on Fans and Sensors for more on computer fan control and
optimization.
An OS installed on a laptop usually has certain power management features
enabled by default.
For example, when the laptop lid is closed, the default behavior is usually to
suspend the OS.
Here are various suggestions for configuring this behavior:
* Generic method
$ vim /etc/systemd/logind.conf
add line "HandleLidSwitch=ignore"
Note, this method may be ineffective when employing a desktop manager (e.g.
Gnome), since the desktop manager often overrides Systemd's settings.
* If using an older version of Gnome, use the graphical utility
gnome-power-manager.
It may need to be installed from your distribution's repository.
Note, despite its name it does not require Gnome to be the active desktop.
* If using a current version of Gnome use the power panel in Gnome's
control panel (i.e. settings).
Some installations of Gnome may not include the lid setting in the
control panel.
* It is also possible to change Gnome settings by way of the command line.
For example, to disable suspend when closing the laptop lid issue the command
$ gsettings set org.gnome.settings-daemon.plugins.power lid-close-battery-action blank
Four arguments are provided to the gsettings command:
1. The first specifies the action. To modify a setting we specify set.
2. The second argument is the Schema. It is basically a specification
of where the setting resides in the settings hierarchy.
The setting for controlling the behavior of the laptop lid is under schema
org.gnome.settings-daemon.plugins.power
To list all installed schemas, issue
$ gsettings list-schemas
3. The third argument, known as Key, is the name of the setting you wish
to modify.
4. The fourth argument is the value you wish to apply to the setting.
For more about gsettings, read its man page
$ man gsettings
See also section GNOME Desktop.
Note, the key "lid-close-battery-action" may not be available in your
installation. To see which keys are available, issue
$ gsettings list-keys org.gnome.settings-daemon.plugins.power
* If all else fails, you can disable the sleep/suspend feature altogether
(although this means you will not be able to put your laptop in sleep mode at
all.)
$ sudo systemctl mask sleep.target suspend.target
Reboot the system. To verify status of sleep and suspend behavior, issue
the command
$ sudo systemctl status sleep.target suspend.target
The status of both should be inactive.
If you wish to reenable sleep/suspend capability, issue
$ sudo systemctl unmask sleep.target suspend.target
and reboot.
For more, see here.
Cloning
********************************************************************************
* - Cloning -
********************************************************************************
Cloning an operating system that is installed on either
* a virtual machine, or
* a physical computer
to a different physical computer is, under normal circumstances, not
recommended practice. In such cases it is better to install the system from
scratch.
If, however, the need arises read on.
Note, the following methods for cloning an OS installation apply to cloning
from either a physical machine or a virtual machine to a physical machine.
However, if cloning a virtual machine to a virtual machine it is easier to
use the cloning feature of your virtualization software (e.g. VMWare,
Virtualbox). For instance in Virtual Box you can clone a drive using the
command:
$ VBoxManage clonemedium ...
Take note, there may be settings in the cloned system that would require
modification in order to distinguish it from the original. For instance
if the original system had a static IP address, the cloned system should
be configured to have a different IP address. Furthermore, an installer
may configure the initrd image and OS to work with the hardware on which
the system was installed. Cloning the system onto other hardware may
result in the wrong drivers being loaded.
--------------------------------------------------------
| Cloning a Linux/Unix installation Method I - using tar |
--------------------------------------------------------
The idea here is to make a copy of the / directory as well as the kernel and
initrd image and place them on a different drive (the drive containing the
clone system).
I choose the tar utility to make the copy of /, although rsync could
be used just the same.
In the following I adopt the following terminology
source partitions - disks/partitions containing OS to be cloned.
destination partitions - disks/partitions that will contain the cloned OS.
storage disk - disk to temporarily store copies of large files.
The steps are as follows:
Part I - Making copies of sources
* Prepare storage disk to temporarily store large files.
If cloning a virtual machine it could be a shared folder (if you have
guest additions or similar installed in the VM).
It could also be a samba share or NFS mount.
For the purpose of this tutorial I'll assume the temporary storage will be
mounted as /mnt/tmpstorage.
* Prepare at least two destination partitions for the cloned system.
1. A root partition - to be mounted later as /mnt/cloneroot
2. A boot partition - to be mounted later as /mnt/cloneboot
* Boot from a live CDROM OS with access to the source partitions and
destination partitions.
Note, I assume that from the live OS session you have access to the
source partitions as well as the destination partitions.
If not, you will have to boot from the live OS twice:
- At present, boot on the machine with access to the source partitions
- In the second stage, boot on the machine with access to the destination
partitions
* Create file systems on destination partitions and label them.
* Create ext4 file system on destination root device
$ mkfs -t ext4 /dev/target_root_device
$ e2fsck -c /dev/target_root_device
$ e2label /dev/target_root_device rootpart
* Create ext4 file system on destination boot device
$ mkfs -t ext4 /dev/target_boot_device
$ e2fsck -c /dev/target_boot_device
$ e2label /dev/target_boot_device bootpart
* Create a few directories within /mnt to mount various file systems
$ mkdir /mnt/tmpstorage
$ mkdir /mnt/rootsrc
$ mkdir /mnt/bootsrc
If the destination partitions are accessible on the current machine, then
$ mkdir /mnt/cloneroot
$ mkdir /mnt/cloneboot
otherwise wait till part II to do this.
* Mount the root and boot file systems from the source partitions
* Mount the root file system of the OS to be cloned
$ mount /dev/src_root_device /mnt/rootsrc
* Mount the boot file system of the OS to be cloned
$ mount /dev/src_boot_device /mnt/bootsrc
* Mount temporary storage
$ mount /dev/tmpstorage_device /mnt/tmpstorage
Note, technically if both the source and destination partitions are
accessible on the live OS then there is no need for tmpstorage.
However, I include this here, because the cloned system might reside on a
second computer, in which case an intermediate storage device is needed.
* Tar the root file system onto temporary storage
$ cd /mnt/rootsrc
$ tar -c -l --preserve-permissions --same-owner --atime-preserve --numeric-owner -f /mnt/tmpstorage/rootfs.tar .
(note, --preserve-order option not being used)
* Copy vmlinuz (kernel) and initrd image onto temporary storage
$ cp /mnt/bootsrc/vmlinuz_ver /mnt/tmpstorage/
$ cp /mnt/bootsrc/initrd_ver.img /mnt/tmpstorage/
(Substitute correct names of your linux kernel and initrd image)
* Copy grub.cfg into tmpstorage:
$ cp /mnt/bootsrc/grub2/grub.cfg /mnt/tmpstorage
Part II - Transferring copies to destination
* If the destination partitions are not accessible on the current machine
(e.g. destination partitions lie on a different physical machine) then
shut it down, and boot from the live CDROM OS on the target machine, and
$ mkdir /mnt/cloneroot
$ mkdir /mnt/cloneboot
$ mkdir /mnt/tmpstorage
* If not already mounted, mount the root and boot destination partitions and
temporary storage:
* Mount the root file system of clone
$ mount /dev/target_root_device /mnt/cloneroot
* Mount the boot file system of clone
$ mount /dev/target_boot_device /mnt/cloneboot
* Mount tmpstorage
$ mount /dev/tmpstorage_device /mnt/tmpstorage
* Untar rootfs.tar onto /mnt/cloneroot
$ cd /mnt/cloneroot
$ tar -x -l --preserve-permissions --same-owner --atime-preserve --numeric-owner -f /mnt/tmpstorage/rootfs.tar .
(note, --preserve-order option not being used)
* Edit /mnt/cloneroot/etc/fstab and modify UUID/label references appropriately
* Force SELinux to relabel root file system
$ touch /mnt/cloneroot/.autorelabel
* If cloning from a virtual machine to a physical machine, and it has an
xorg.conf file, then remove or rename the virtual machine's xorg.conf
$ mv /mnt/cloneroot/etc/X11/xorg.conf /mnt/cloneroot/etc/X11/noxorg.conf
The reason for this is that xorg.conf specifies to use the virtual machine's
video driver, which is not usable by your physical machine.
Note, most modern Linux installations do not use a static xorg.conf, rather
they probe the hardware and generate dynamic settings and load necessary
drivers.
Part III - Boot stuff
* Two possible scenarios:
* Destination /boot partition doesn't have grub installed yet
Mount boot partition and run grub-install (formerly grub2-install) to
install grub2
$ grub-install --boot-directory=/mnt/cloneboot /dev/target_mbr_device # e.g. /dev/sdd
Normally you would like to place grub stage 1 on the MBR of the same disk
in which the /boot partition resides. In that case target_mbr_device
should be the same as target_boot_device.
* The machine onto which you are cloning has another OS installed
In this case you are looking for a dual boot configuration, in which case
do not install another grub2, rather just modify grub.cfg as
described in a later step.
* Make or copy over grub.cfg file
$ cp /mnt/tmpstorage/grub.cfg /mnt/cloneboot/grub2
* Place vmlinuz and initrd in target disk's boot partition
$ cp /mnt/tmpstorage/vmlinuz_ver /mnt/cloneboot
$ cp /mnt/tmpstorage/initrd_ver.img /mnt/cloneboot
* Edit grub.cfg (/mnt/cloneboot/grub2/grub.cfg) and add entry for cloned system
correctly referencing the new target kernel and initrd image as well as change
of UUIDs and labels.
$ vim /mnt/cloneboot/grub2/grub.cfg
* If cloning an Archlinux system you will need to reinstall "linux" package as
follows:
Boot from an archlinux CDROM install image (e.g. on a USB dongle).
Mount cloned root file system to /mnt
$ mount -t ext4 /dev/target_root_device /mnt
Then issue the following commands
$ arch-chroot /mnt
$ pacman -R linux
$ pacman -S linux
Alternatively recreate initrd image (if you know how to).
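On Archlinux, recreating the initrd amounts to regenerating the initramfs from
within the chroot (a brief sketch, assuming the cloned root is still mounted on
/mnt as above):
$ arch-chroot /mnt
$ mkinitcpio -p linux   # regenerate the initramfs images for the stock kernel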
--------------------------------------------------------
| Cloning a Linux/Unix installation Method II - using dd |
--------------------------------------------------------
Note, this method of cloning an OS is not recommended, since the utility dd
being used to copy one partition to another is not aware of bad blocks,
which could lead to copying from one partition onto bad blocks of another.
* Create boot partition if doesn't exist, and copy over Linux kernel and initrd
files as described in the method I.
* Create identically sized partition on destination drive using fdisk or gdisk.
Might need to run partprobe.
* Use dd to duplicate partition content
$ dd if=/dev/sdXY of=/dev/sdWZ bs=4096
where /dev/sdXY is source partition, /dev/sdWZ is destination partition.
Note, if there are any bad blocks on the destination drive, then information
(possibly critical) destined for the bad blocks would be missing from the
clone.
* Create labels and UUID for any new partitions, including a change of
label and UUID for the duplicated partition.
(Some useful tools are e2label, uuidgen, tune2fs, dumpe2fs)
* Change entries in /etc/fstab of the newly duplicated OS to match appropriate
labels and UUID's.
Also include nofail option with partitions that you would like to load
at boot time, but not cause failure if they are unavailable (e.g. a home
directory that sits on an external drive.)
* Change owner of user directories to that user
$ chown -R user:user /home/user
* Run grub2-install to install grub2 and make grub.cfg file
* If OS uses SELinux, then cause SELinux to relabel root file system
$ touch /.autorelabel
For more details on some of the steps see method I.
--------------------------------------------
| To clone MS-Windows into a virtual machine |
--------------------------------------------
Note: MS-Windows is proprietary software owned by Microsoft Corporation
and is licensed software.
If the object of the cloning is to end up with two or more Windows
installations without sufficient licenses, this is against Microsoft's
license agreement, and is likely against the law in most countries.
If the object of the cloning is to virtualize an existing Windows installation,
whereby the old Windows is discarded, refer to Microsoft's EULA for the
version of Windows you wish to clone (see User Terms) to determine
whether that is permitted under their agreement.
Method I: (See disk2vhd)
* Can use Disk2VHD application to create a Windows backup image.
It will be created as a .VHD file.
* Once that has been done, create a Windows VM using virtualbox and
point the harddrive to the .VHD file.
* It might also be necessary to change the HD controller from SATA to IDE in the
virtual machine settings.
Method II: (see Migrate Windows)
* Obtain a copy of MergeIDE (MergeIDE)
* Run it in Windows.
Earlier versions of Windows would normally refuse to boot if during the boot
process they detect that the harddrive controller ID is different than the
one present when Windows was installed.
This program makes a change in the registry telling Windows to ignore the
harddrive controller ID as a condition for booting.
* Reboot from a live CDROM image.
* Copy entire drive onto a file using dd
e.g.
$ dd if=/dev/sda of=$HOME/clone/hdclone.raw bs=4096
* At this point you can convert the raw harddrive image to a virtual machine
storage device format such as vmdk (VMware), vdi (Virtualbox) or qcow2
(qemu), as sketched below.
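A sketch of that conversion step (the file name follows the dd example above;
qemu-img and VBoxManage are assumed to be installed):
# Convert the raw image to qcow2 for use with qemu/KVM
$ qemu-img convert -f raw -O qcow2 $HOME/clone/hdclone.raw $HOME/clone/hdclone.qcow2
# Or convert it to VDI for use with Virtualbox
$ VBoxManage convertfromraw $HOME/clone/hdclone.raw $HOME/clone/hdclone.vdi --format VDI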
Scanner
********************************************************************************
* - Scanner -
********************************************************************************
In Linux, scanner access is provided by the SANE (Scanner Access Now Easy)
interface. Besides scanners, SANE supports digital cameras. Refer to SANE's
manpage for more
$ man sane
SANE provides many backend interfaces, which are essentially drivers for
different scanner hardware. The backends are bundled into the package
sane-backends.
See the "BACKENDS FOR SCANNERS" section of the SANE man page for a list of
available backends and a short description of each.
Each backend also has its own man page. The man page specifies the chipsets
supported by the backend and a list of applicable scanner and vendor models.
It also lists and describes the options available with the backend.
For example my HP scanjet 2200c uses the plustek backend of sane.
To bring up its man page issue
$ man sane-plustek
SANE also comes with various front end utilities, which provide the user with
a way of interacting with the scanner hardware in a familiar and consistent
manner (i.e. using the same software interface to control different scanner
devices). The sane-frontends package includes a number of programs:
* scanimage
This is a command line utility for scanning images. It comes with numerous
options. See below for basic usage. For complete details refer to its
manpage.
* xsane
A graphical scanner frontend to SANE.
* xcam
A graphical camera frontend to SANE.
* scanadf
A commandline utility for scanning multi-page documents. It is intended to
be used with ADF (automatic document feeder) capable scanners.
---------------------
| Scanning a document |
---------------------
Note, Steps 1 and 2 can be skipped if you already have permissions to the
scanner hardware (e.g. your installation permits user access to scanners, or
you belong to an administrative group such as wheel).
Once the scanner is plugged in:
1. Look for a USB scanner
$ lsusb
Identify the scanner device from the list of USB devices. For example:
Bus 001 Device 004: ID 03f0:0605 Hewlett-Packard ScanJet 2200c
The important information here is the bus number (001) and the device
number (004).
2. Change permissions. For example
$ chmod a+rw /dev/bus/usb/001/004
3. To list available scanners
$ scanimage -L
4. Run xsane locally
$ xsane
Use the graphical interface to specify settings and options (e.g. borders,
resolution, output format, etc.).
In the xsane window, specify the output file name, and press the "Scan"
button to scan the page.
Xsane also provides a preview window. Obtain a low resolution preview by
pressing the "Acquire Preview" button.
5. You can also scan an image via the command line with
$ scanimage > file.pnm
Some options that come with scanimage are:
-T Test scanner backend
-p Print progress percentage
--format=tiff Set format to a certain type (defaults to pnm)
--mode lineart|gray|color Scanning mode
-x 100 Scan 100mm in horizontal direction
-y 100 Scan 100mm in vertical direction
--resolution 200 Sets scanning resolution to 200dpi (default 50)
--lamp-switch=yes Manually switch on lamp (default no)
--lamp-off-at-exit=yes After scanning page turn off lamp (default no)
--warmup-time -1..999 Time to give the scanner to warm up lamp
--help A list of all options
Example:
$ scanimage -x 210 -y 290 --resolution 200 -p --mode Gray > file.pnm
If the option "--lamp-off-at-exit=no", the lamp will remain on after the
page was scanned. To manually turn off the lamp, issue
$ scanimage -n
Wacom Tablets
********************************************************************************
* - Wacom Tablets -
********************************************************************************
Wacom is a Japanese company that specializes in tablet and stylus products.
They offer a wide range of technologies, ranging from simple tablets with a
stylus to sophisticated self contained computers for professional artists and
graphics designers. Wacom tablets are popular for annotation and artwork.
The Linux Wacom Project provides Linux with Wacom drivers and configuration
utilities. This section discusses Linux driver installation for Wacom's line of
tablets, as well as configuration and usage.
The ArchLinux Wiki has an excellent article on installing and configuring
Wacom tablets.
--------------
| Installation |
--------------
In Archlinux the input-wacom driver is installed by default. This may be
sufficient to handle your wacom tablet. Note, however, that the standard
Xorg driver doesn't give the same level of smoothness, nor access to all the
features offered by the tablet.
If your tablet doesn't work with the included driver or you require access to
the more advanced features of the tablet, you will need the more specialized
and up-to-date driver package: xf86-input-wacom (for installation see the
section below).
If installing manually, do as follows:
# Change into the download directory where you downloaded the wacom modules
$ cd $HOME/Downloads/wacom/input-wacom-0.27.0
# Copy wacom modules to appropriate destination
$ cp 3.7/wacom.ko /lib/modules/`uname -r`/kernel/drivers/input/tablet
$ cp 3.7/wacom_w8001.ko /lib/modules/`uname -r`/kernel/drivers/input/touchscreen
# Remove old modules
$ modprobe -r wacom
$ modprobe -r wacom_w8001
# Install new modules
$ insmod /lib/modules/`uname -r`/kernel/drivers/input/tablet/wacom.ko
$ insmod /lib/modules/`uname -r`/kernel/drivers/input/touchscreen/wacom_w8001.ko
-----------------------------
| Installing xf86-input-wacom |
-----------------------------
Note, this part is not necessary for the simple operation of the tablet.
To install xf86-input-wacom (the x-driver portion):
Fetch the latest tar-ball from source-forge
The required dependencies for xf86-input-wacom are:
xorg-x11-server-devel
xorg-x11-util-macros
libXi-devel
libXrandr-devel
libXext-devel
libX11-devel
libXinerama-devel
libudev-devel
Install those beforehand.
---------------
| Configuration |
---------------
To set various settings and features of the tablet use "xsetwacom".
To list devices:
$ xsetwacom --list devices
For specific configuration needs see xsetwacom man page, as well as the above
cited ArchWiki article.
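For instance, to map the tablet area to a single monitor (a sketch; the device
name and the output name are examples, substitute those reported by
"xsetwacom --list devices" and xrandr):
$ xsetwacom --set "Wacom Intuos S Pen stylus" MapToOutput HDMI-1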
Gnome settings has an entry for Wacom tablets which you can use to tweak your
tablet and stylus.
KDE likely comes with a similar configuration utility.
------------
| Usage Tips |
------------
In xournal, make sure xinput is checked in order to use wacom as a
tablet rather than a mouse. Other applications such as Gimp may have a
similar option, although I have not checked.
Note, using the tablet as a mouse forgoes the stylus sensitivity feature, as
well as the high resolution afforded by the tablet. Strokes will have a jagged
or serrated look to them.
Fans and Sensors
********************************************************************************
* - Fans and Sensors -
********************************************************************************
A desktop computer normally has one fan located directly on top of its main
processor and one or more fans located at strategic points in the chassis.
The processor fan directs the heat away from the processor and motherboard.
The chassis fan or fans direct the heat out of the chassis.
A standard laptop usually comes with one fan installed near one of the sides.
A liquid filled copper conduit conducts the heat from the processor towards the
fan. The fan blows on the conduit cooling down the liquid, and the liquid in
turn is circulated back to the processor.
A graphics card will often have its own fan as well.
Regulating the temperature of the computer's processor and other critical
components is normally accomplished with these fans. Normally, the computer's
BIOS and/or operating system control the operation, and in particular, the
speed of these fans. The controlling software makes use of temperature data
collected from strategically located sensors. As the load on the processor
increases, the sensors will register increased temperatures. The fan controller
software will in turn increase the fan speed, bringing the temperature back
down.
Under normal circumstances no user intervention is required in the operation
of the fan(s). However, in certain circumstances, such as
* a fault in the BIOS or operating system's management of the fan, or
* a need to optimize the operation of the fans
it may be necessary to use alternative software to activate or fine tune the
operation of the fan(s).
The following is a (partial) list of fan and sensor monitor/control software
for Linux.
* lm_sensors - Package that provides control and monitor of sensors
* xsensors/ksensors - x11/KDE interface to lm_sensors
* thinkfan - Fan control application (thinkfan.sourceforge.net)
Refer to the following Archwiki article for a more thorough treatment of this topic.
------------
| LM_Sensors |
------------
To read the system's sensors, invoke the sensors command:
$ sensors
The output will include the temperatures, voltages and fan speeds of various
components.
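If no readings appear, the hardware probing script shipped with lm_sensors can
be run first (a sketch; the default answers it offers are usually safe):
$ sudo sensors-detect   # probe for sensor chips and suggest kernel modules to load
$ sensors               # readings should now appear for the detected chips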
To set temperature limits, invoke
$ sensors -s
This will parse the configuration files /etc/sensors.conf and /etc/sensors3.conf
and set the limits as specified there.
Refer to the man page of the configuration file for more
$ man sensors.conf
For more about the sensors command refer to its man page
$ man sensors
X Windows
********************************************************************************
* - X Window System -
********************************************************************************
--------------
| Introduction |
--------------
The X Window System is a graphical windowing system found in many Unix like OSs.
In referring to the X Window System you will also encounter the terms X Windows,
XOrg, X11 or simply X.
For additional background on X Windows (beyond what is found here) refer to the
Wikipedia articles on Xorg and X Window System.
For more technical detail on installation and configuration refer to this
Archwiki article.
---------------------
| Client Server Model |
---------------------
The X window system implements a client server model, whereby the applications
are clients and the terminal or station rendering the graphics output is the
server.
 -------------           ------------------
| Application |_________| Display Terminal |
| (X Client)  |         | (X Server)       |
 -------------           ------------------
Applications can send rendering requests to an X server running on the
same machine, or to an X server running on a remote machine (accessible on the
network or internet). Regardless, X works over the network in a manner that is
transparent to the user.
The following is an illustration of running X over the network.
Bob would like to run a computationally intensive mathematical simulation in
an application called Octave on his powerful computer at work, from the
convenience of his home. His home computer has IP address 2.2.2.2, while his
computer at work has IP address 1.1.1.1
Bob does as follows. He logs into his computer at work using a terminal.
(home)$ ssh 1.1.1.1
(Note, "ssh -X 1.1.1.1" would instead tunnel the X traffic over the ssh
connection and set DISPLAY automatically, making the manual DISPLAY and xhost
steps below unnecessary.)
He would like the computer at work to run Octave, but have Octave's graphical
interface show up on his display at home. So in the terminal that's now
connected to his work computer via ssh he types
(work)$ DISPLAY=2.2.2.2:0; export DISPLAY
This causes his work computer to send all future X requests to his home
computer. However, for security reasons he must inform his local computer to
accept X connections and calls from address 1.1.1.1 (i.e. his work computer.)
So from a local terminal he types
(home)$ xhost +1.1.1.1
This command adds Bob's machine at work to the list of authorized computers
(computers allowed to use the X server running on Bob's home machine).
He then launches Octave from the terminal connected to his work computer
(work)$ octave
Octave's graphical interface pops up on his display at home. He can then
interact with Octave using his keyboard and mouse as though Octave were
running on his home computer. Any simulation results and plots will display
on his screen at home.
Even though everything shows up on Bob's screen at home, it is Bob's work
computer that's doing all the number crunching. So Bob benefits from the
powerful processing capabilities of his computer at work, and yet is able to
display the results on his home computer.
On a technical level this is what happens:
Octave, which is running on Bob's work computer, sends Bob's home computer
graphical rendering instructions. The X server on Bob's home computer uses
these instructions to display windows and graphics on Bob's screen at home.
Octave is the X client, and the software running on Bob's home
computer accepting these graphical instructions is the X server.
For more about X, xinit and startx see man page
$ man X
$ man xinit
$ man startx
------------------
| DISPLAY variable |
------------------
In order for a client to render windows/graphics on the display terminal the
DISPLAY variable must first be set. If you are starting a window or desktop
session this variable will be automatically set for all open terminals to point
to the local display.
The DISPLAY variable is a string composed of three fields:
Hostname:Displaynumber.Screennumber
* Hostname is the network name or IP address. If left blank the computer
running the X server is assumed.
* Displaynumber is the monitor (or set of monitors in a multihead configuration)
associated with a keyboard and mouse. This is a mandatory field. For single
user systems it is set to 0. On multi-user systems multiple displays may be
available for different users, in which case each user will have his own
display (0, 1, etc.).
* Screennumber is relevant in a multihead configuration where each monitor
is its own work environment. That is, one screen is not a continuation of
another screen, but rather an independent work space.
Note, a multihead configuration whereby the monitors act as a single larger
logical monitor is considered only one screen.
The ":" separates between the network address/name field (on its left) and the
display/screen fields (on its right.)
A few examples of setting the DISPLAY variable.
$ DISPLAY=:0
$ DISPLAY=:1.5 # Local computer, Display 1, Screen 5
$ DISPLAY=192.168.25.3:0
In the first two examples the network field is left blank. This tells X to
display windows and graphics on the local machine's display.
In the first of the examples graphical output will be sent to display 0,
screen 0. In the second example the graphical output will be sent to display 1,
screen 5.
In the third example X is told to display windows on the machine having IP
address 192.168.25.3, display 0, screen 0.
An example of displaying an xterm window on the local display would
involve the following steps.
$ DISPLAY=:0; export DISPLAY
$ xterm
The xterm program will be launched and its window displayed on the local
display.
Normally setting the display is not necessary as the desktop session or
startx command will set that up.
If you are logged into a text based terminal (say remotely using ssh) and you
would like to run graphical applications on your own graphical terminal then you
need to set this variable manually.
---------------
| Authorization |
---------------
It was mentioned earlier that for security reasons X requires authorization
to accept X requests. If not for this, anybody could pop up windows and
graphics on each other's X terminal. They could perhaps even fool a user into
providing them with their password, by, say, running a screen lock program
prompting for a password to release the screen (since many systems lock the
screen by default, the user is unlikely to suspect foul play.)
X has three ways of authorizing access to clients:
* Access by host
This method was used in the example above.
To allow anyone access issue the command in a terminal where your X server
is running
$ xhost +
To allow specific machines access, issue
$ xhost +IP|name
where an IP address or hostname is provided.
For more usage options read the man page
$ man xhost
* Cookie access
This method uses cookies (pieces of arbitrary data) that are shared between
server and client. A client authenticates itself with a cookie, which the
server accepts as evidence that the client is authorized to display windows
and graphics on its display terminal. Cookies are stored in the .Xauthority
file. This file is found in the user's home directory, and is readable only
by its owner. The xauth utility is used to work with this access method
(see the example after this list). Read its man page for more
$ man xauth
To enable/disable xauthority in X Windows, edit the file startx (usually
in /usr/bin) and look for a line
enable_xauth=...
Set to 0 to disable, and 1 to enable.
Note, if you upgrade your system, startx will be overwritten and you will
have to make the change again.
* User access
(I am unfamiliar with this access method.)
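As an illustration of the cookie method, the following xauth invocations can be
used (a sketch; "otherhost" is a placeholder for the name of the machine that
should be allowed to use your display):
$ xauth list                              # list the cookies in ~/.Xauthority
$ xauth extract - $DISPLAY | ssh otherhost xauth merge -
The second command copies the cookie for the current display to otherhost, so
that clients started there can authenticate against your X server.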
------------------------------------
| X Server configuration - xorg.conf |
------------------------------------
It used to be that an X window system required a configuration file known as
xorg.conf to start up. This file told X all it needed to know about the
hardware that it needs to interface with, which includes mainly graphics
card(s), monitor(s), mouse and keyboard. As hardware devices (and X) became
more sophisticated it became possible for X to probe the hardware for all the
details it needs in order to interface with it without the need for
this file. In fact, a modern X Windows installation will configure itself in
accordance with the hardware detected without an xorg.conf file at all.
However, the xorg.conf file may still be necessary at times for bypassing X's auto
configuration mechanism in the event X is unable to correctly identify the
hardware or when interfacing with legacy hardware. To manually configure X,
create or generate an xorg.conf file and place it in the /etc/X11 directory.
When X starts up it will detect the file and use it to override auto
configured settings.
Nowadays ordinary users are unlikely to need to tinker with X's automatically
generated settings. But if the need arises I provide an example file with some
explanation. For more about the xorg.conf file and its structure refer to
this webpage.
# -- BEGIN of xorg.conf --
# General server layout section. References one or more screens and one or more
# of each input device such as mouse and keyboard. A simple configuration will
# have only one layout section. For configurations with more than one layout
# only one can be active at a time. The default layout can be specified in the
# ServerFlags section with the DefaultServerLayout option (see below).
# Each device referenced in the Layout section, must have a separate section
# devoted to it further in the file, specifying the details of that device.
Section "ServerLayout"
Identifier "My Only Layout" # Name of layout
Screen 0 "Screen0" 0 0 # First 0 is screen number.
# "Screen0" is identifier.
# 0 0 are upper left corner's coordinates.
# In multihead configuration can specify additional screens. For example
# Screen 1 "Screen1" RightOf "Screen0", places a second screen to the right
# of the first screen.
InputDevice "Mouse0" "CorePointer"
InputDevice "Keyboard0" "CoreKeyboard"
EndSection
# This section tells X where to find certain files. For this example an
# RGB (color) table and some font paths
Section "Files"
RgbPath "/usr/share/X11"
FontPath "/usr/share/X11/fonts/100dpi"
FontPath "/usr/share/X11/fonts/75dpi"
FontPath "/usr/share/X11/fonts/Type1"
FontPath "/usr/share/X11/fonts/misc"
EndSection
# This section tells X which modules to load.
Section "Module"
Load "dbe"
Load "extmod"
Load "fbdevhw"
Load "glx"
Load "record"
Load "freetype"
Load "type1"
Load "dri"
EndSection
# Various server settings go here.
Section "ServerFlags"
# The following option is useful to tell X which is the default layout
# (it's not really necessary here, since only one layout was defined)
Option "DefaultServerLayout" "My Only Layout"
# The following option tells X to start even without a mouse
Option "AllowMouseOpenFail" "yes"
EndSection
# From here on the various devices specified in the "Server Layout" section
# are defined and configured.
# Keyboard device configuration
Section "InputDevice"
Identifier "Keyboard0" # Identifier used to reference device
Driver "kbd"
Option "XkbModel" "pc105"
Option "XkbLayout" "us,gr" # load up US and GR (greek) keyboard layouts
# Specify that Alt-Shift keys will be used to toggle between the two layouts
Option "XKbOptions" "grp:alt_shift_toggle"
EndSection
# Mouse device configuration
Section "InputDevice"
Identifier "Mouse0" # Identifier used to reference device
Driver "mouse"
Option "Protocol" "IMPS/2"
option "device" "/dev/input/mice"
Option "ZAxisMapping" "4 5"
# The following option tells X that simultaneously pressing the left and right
# mouse buttons should be treated as though the middle button was pressed.
Option "Emulate3Buttons" "yes"
EndSection
# This section tells X what it needs to know about the monitor.
# It is referenced below in Section "Screen".
# For a setup with multiple monitors a section for each monitor is necessary.
Section "Monitor"
Identifier "MySGIflatpanel" # Identifier used to reference monitor
VendorName "SGI" # An optional entry specifying manufacturer
ModelName "SGI 1600SW FlatPanel" # An optional entry specifying model
# Specify the range of horizontal sync frequencies supported by the monitor.
# Default units are kHz. If not specified defaults to 28-33kHz
HorizSync 31.5 - 121.0
# Specify the range of supported vertical refresh frequencies. Can also
# specify discrete values. Default units are Hz. If omitted defaults to
# 43-72 Hz.
VertRefresh 60.0 - 150.0
Option "dpms" true # Enable DPMS extension for power management of screen
# The following is a compact way of setting the video mode(s) of the monitor
# Monitors usually support multiple mode lines (here only one is configured.)
modeline "1600x1024d32" 103.125 1600 1600 1656 1664 1024 1024 1029 1030 HSkew 7 +hsync +vsync
# The modeline is composed of the following fields:
# * The identifier of the mode. In this example it is "1600x1024d32"
# * The dot clock (pixel) clock rate for the mode in MHz. For this
#   example it's 103.125 MHz.
# One over this quantity is roughly the amount of time the pixel is
# illuminated by the electron beam in a CRT monitor. This number is mainly
# a function of the number of horizontal pixels, number of vertical pixels
# and refresh rate. It is also affected by the horizontal and vertical
# retrace times of the beams.
# See this webpage for how to calculate the dot clock rate.
# * The next four numbers refer to the horizontal timings of the mode
# 1600 1600 1656 1664
# | | | |___________
# | | |__________ |
# | | | |
# hdisp hsyncstart hsyncend htotal
# * The next four numbers refer to the vertical timings of the mode
# 1024 1024 1029 1030
# | | | |___________
# | | |__________ |
# | | | |
# vdisp vsyncstart vsyncend vtotal
# * The rest of the line are options, which for this example include the
# +HSync and +VSync (positive polarity for HSync and VSync signals)
# Note, CRT monitors supported the VESA standard modes, which made it
# unnecessary to specify modelines. Today's high resolution flat panels
# support many additional modes which X can detect automatically.
EndSection
# Section "Device" specifies a graphics card and is referenced below in
# Section "Screen"
Section "Device"
Identifier "Videocard0" # Identifier used to reference video card
Driver "i128" # The driver used by X to drive the card
VendorName "Number Nine" # (Optional) Video card vendor
BoardName "Number Nine Revolution IV (T2R4)" # (Optional) Video card name
EndSection
# The screen as specified in "Server Layout" section
Section "Screen"
Identifier "Screen0" # Identifier used to reference screen
Device "Videocard0" # Device (graphics card) associated with this screen
Monitor "Monitor0" # Monitor associated with this screen
DefaultDepth 24 # Number of bits per pixel
DefaultFbBpp 32
SubSection "Display"
Depth 24
FbBpp 32
Modes "1600x1024d32"
EndSubSection
EndSection
# An optional section providing information for "Direct Rendering
# Infrastructure" (see this Wikipedia article for more about DRI.)
Section "DRI"
Group 0
Mode 0666
EndSection
# -- END of xorg.conf --
------------------------------------
| Xrandr - changing modes on the fly |
------------------------------------
xrandr can be used to set the size, orientation and reflection of the output
for a screen. It can also set the screen size.
Some usage examples:
* Output information on current configuration
$ xrandr
or
$ xrandr --current
* Flip screen upside down
$ xrandr -o inverted
* Switch back to normal screen mode
$ xrandr -o normal
* Rotate screen
$ xrandr -o left
* Have a little fun
$ xrandr -o inverted; sleep 5; xrandr -o normal
Xrandr can also be used to change video modes on the fly. A video mode is
a specification of resolution, screen refresh rate, color depth, and more
(see subsection "xorg.conf" above.)
In the following example I switch between resolution modes.
First I get a list of available modes:
$ xrandr
HDMI-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 510mm x 290mm
1920x1080 60.00*+
1680x1050 59.88
1280x1024 75.02 60.02
1152x864 75.00
1024x768 75.03 60.00
800x600 75.00 60.32
640x480 75.00 59.94
720x400 70.08
To change to a resolution mode of 1680 x 1050 pixels
$ xrandr --output "HDMI-1" "1680x1050"
In a second example, I use the following command invocation to switch back to
full screen resolution mode on my laptop after it has been set to 1024x768 mode
when connected to a projector:
$ xrandr --output "LVDS1" --mode "1440x900"
For additional help and a list of options
$ xrandr --help
or
$ man xrandr
--------------------
| Creating new modes |
--------------------
If none of the modes you wish to use are listed by xrandr, then you can generate
new modes. To do so, follow the procedure set forth in this example, and
substitute your own values. This example assumes a VGA driver.
Step 1: Use cvt command to generate a mode line for the desired resolution
assuming a VGA driver.
$ cvt 1600 900 # 1600=horizontal resolution, 900=vertical resolution
The cvt command calculates the VESA mode line for the desired resolution.
It outputs the following two lines (the first line is informational and the
second line is the mode line):
# 1600x900 59.95 Hz (CVT 1.44M9) hsync: 55.99 kHz; pclk: 118.25 MHz
Modeline "1600x900_60.00" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync
Step 2: Add a new mode corresponding to this modeline
$ xrandr --newmode "1600x900" 118.25 1600 1696 1856 2112 900 903 908 934 -hsync +vsync
Step 3: Add mode to specific monitor output (e.g. Virtual-1)
$ xrandr --addmode Virtual-1 "1600x900"
Step 4: Select this to be the active mode (X should now switch into this mode)
$ xrandr --output Virtual-1 --mode "1600x900"
------------
| Starting X |
------------
The way X is launched depends a lot on how you installed or configured your
Linux/Unix system.
If the installation includes an X based session manager (e.g. xdm) or an X
based desktop environment (e.g. LXDE, Xfce), the installer will configure the
boot process to launch X during the latter stage of the boot process, after
which a graphical login screen appears.
(Note, many Linux distributions come with the GNOME desktop environment, which
uses Wayland by default rather than X.)
In a more basic or customized installation the boot process ends with a standard
text terminal display and a login prompt.
To start X, login and type
$ startx
Startx is a script that calls the xinit program, and supplies it with all
the necessary arguments to get a basic X server instance up and running.
Although the user can invoke xinit directly, one needs to know what one is
doing to get anything meaningful to happen.
Xinit (called within startx) starts the X server (/usr/bin/X), and looks for
the file .xinitrc in the user's home directory. This file is important,
as it contains a sequence of actions and programs to launch after the X server
has started. Note, if a local .xinitrc file is not found, a global xinitrc
file is searched for (e.g. /etc/X11/xinit/xinitrc).
Below is a sample .xinitrc file.
# -- Begin .xinitrc --
# Set login file
xlogin=$HOME/.xlogin.sh
# Add font path(s) (ones that are not auto-configured or specified in xorg.conf)
xset fp+ $HOME/lib/fonts/Hfonts
xset fp rehash
# Set display background color or pattern to black
xsetroot -solid black
# Set screen to blank after 300 seconds (5 minutes)
xset s on
xset s blank
xset s 300
# Run the fvwm2 window manager (see Window Manager for more on fvwm)
fvwm2 &
# Record DISPLAY variable
echo $DISPLAY > $HOME/.X_loginhost
# Sleep a bit, until fvwm starts up
sleep 2
# Open up xterm (in console mode)
xterm -title console -n console -sb -sl 200 -tn xterm \
-en UTF-8 \
-background white -foreground black -geometry 80x14+10+5 -C
# Clean up
rm $HOME/.X_loginhost
# -- End .xinitrc --
This file is merely a shell script that gets processed after the X server is
up and running. One of the important steps in the script is to launch a window
manager. In the .xinitrc script above the fvwm window manager is launched
(see below for more about it.) If a window manager is not launched, windows
will not have borders, control buttons, nor can they be moved around, and the
last window opened will conceal windows or portions of windows underneath it.
Nor will there be menus. Not very useful!
The next to last thing the script does is to open up an xterm terminal emulator
in console mode (-C option puts it in console mode, which means that system
messages get passed to it and displayed). To exit out of the X session, simply
close up the console terminal by either pressing the close button provided by
the window manager, or by typing "exit" in the xterm window.
To have .xinitrc automatically open other programs, simply insert the desired
commands prior to the final xterm command.
For example
xterm -title terminal1 &
xfig &
xpdf &
Notice, these commands are launched as background jobs, whereas the final xterm
in the script above is not. The reason for this is that if run as foreground
jobs, then xfig will only open after the first xterm is closed. Similarly xpdf
will only open after xfig closes.
Technically you could also run the final xterm (console) as a background job,
but then you will need to have something to keep the execution of the script
from terminating, since as soon as the .xinitrc script completes, xinit
will end the X session.
You could put something like
sleep 1000000000
at the end of .xinitrc. The sleep command will complete only after 1 billion
seconds, which is about 31 years. In such a case, to practically close down the
X session you will have to have configured some menu item in your window
manager to end the X session (instead of waiting for .xinitrc to complete.)
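A common alternative to the sleep trick is to end .xinitrc by exec'ing the
window manager itself, so that the X session lasts exactly as long as the window
manager runs. The following is a minimal sketch (it assumes fvwm2 is installed;
substitute your own window manager and programs):
# -- Begin minimal .xinitrc (sketch) --
xrdb -load $HOME/.Xresources    # load X resources, if the file exists
xsetroot -solid black           # set the background
xterm &                         # a terminal, launched as a background job
exec fvwm2                      # the session ends when the window manager exits
# -- End minimal .xinitrc --
With this arrangement, quitting the window manager (e.g. via its menu) ends the
X session.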
-----------
| Terminals |
-----------
/etc/termcap is a terminal database describing the capabilities of
character-cell terminals and printers.
It is superseded by terminfo databases which are more comprehensive
and the current standard.
They can be found in either the /etc/terminfo or /usr/share/terminfo directories.
Some common terminal emulators:
* xterm
Xterm is the classic terminal that comes with X.
Some attributes of xterm can be modified using one of three menus.
To access the menus press the CTRL key together with one of the mouse
buttons.
* Ctrl - Left mouse button
Pulls up the main menu which provides a way of modifying assorted features,
such as full screen, capture keyboard and more.
* Ctrl - Middle mouse button
Pulls up VT (virtual terminal) options, such as enabling/disabling
scrollbar, reverse video, and more.
* Ctrl - Right mouse button
Pulls up font menu for adjusting font size, enabling UTF-8 fonts and titles
(useful for multilingual environments), and other font related settings.
Window and Icon name attributes can be modified from within the xterm using
the following key sequences.
* Set icon name and window title to whatever string is
ESC]0;stringBEL
* Set icon name to string
ESC]1;stringBEL
* Set window title to string
ESC]2;stringBEL
For example:
* To set window title to sftp10.0.1.1
$ echo -en "\033]2;sftp10.0.1.1\007"
* To set icon name and window title to user@host
$ echo -en "\033]0;${USER}@${HOST}\007"
Note, the "n" option suppresses the implicit newline character, and "e" enables
backslash interpretation. Also, ESC=\033 and BEL=\007.
For more refer to the Xterm title HOWTO.
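For convenience, the escape sequence can be wrapped in a small shell function
(a sketch for bash; the function name settitle is arbitrary):
settitle () { echo -en "\033]2;$1\007"; }
After defining it (e.g. in .bashrc) the window title can be set with
$ settitle "build server"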
Some useful options for xterm:
-fn font
e.g. "-fn 10x20" use font known by alias 10x20 (use xlsfonts command to
get a list of available fonts. Also see below for more on fonts).
-geometry geometry_specification
e.g. "-geom 80x25" opens an 80 by 25 character xterm window
e.g. "-geom 80x25+100+100" opens it such that the xterm's top left
corner is situated 100 pixels down and 100 pixels to the right of
the screen's top left corner.
-tn termtype # e.g. xterm, xterm-color, xterm-basic, xterms
For a full description of its myriad of options refer to the man page
$ man xterm
* rxvt
This is a derivative of xterm with a more modern look.
Keyboard shortcuts:
Scroll down: Shift-PageDown
Scroll up: Shift-PageUp
Increase font size: Shift Kp_Add (Keypad +)
Decrease font size: Shift Kp_Subtract (Keypad -)
* tmux
Tmux is short for terminal multiplexer.
It allows you to open many sub terminals from a terminal emulator of your
choice (e.g. xterm). But in fact it is much more than that.
Refer to this section for more about it.
-------
| Fonts |
-------
See section Fonts below.
-----------------
| Freedesktop.org |
-----------------
Freedesktop.org is a project to work on interoperability and shared base
technology for free software desktop environments for the X Window System (X11)
on Linux and other Unix-like operating systems (Wikipedia).
freedesktop.org was formerly known as the X Desktop Group (XDG).
Freedesktop contains a few utilities that reflect this mission statement.
One of them is xdg-settings.
For example, to check whether firefox is the default web browser:
$ xdg-settings check default-web-browser firefox.desktop
To change the default web browser:
$ xdg-settings set default-web-browser google-chrome.desktop
For more about it see the man page
$ man xdg-settings
--------------------
| X Windows Settings |
--------------------
Changing X-window settings can be done with the xset command.
$ xset [options]
Without options xset will list all available options.
$ xset
This command is handy for setting volume, fontpaths, screen saver and more.
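A few illustrative invocations (the values used are arbitrary examples):
$ xset q                 # query the current settings
$ xset s 600             # blank the screen after 600 seconds of inactivity
$ xset r rate 250 30     # keyboard repeat: 250 ms delay, 30 repeats per second
$ xset b off             # turn the bell off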
To modify and manipulate input device (e.g. keyboard, mouse) settings use the
xinput command.
* To list input devices
$ xinput
or
$ xinput --list
or
$ xinput --list --short
* To list properties of a device
$ xinput list-props ID
The ID of the input device can be obtained with one of the invocations of
the previous example. In the following examples the input device is a mouse
with ID=11
$ xinput list-props 11
* To set a property (e.g. enabling middle button emulation for a mouse)
$ xinput set-prop 11 "libinput Middle Emulation Enabled" 1
$ xinput set-prop 11 "libinput Middle Emulation Enabled Default" 1
---------------------
| Copying and Pasting |
---------------------
The traditional way of copying and pasting in X is as follows:
* Copying - Simply drag the mouse over the text you wish to copy. No need for
further action.
* Pasting - Move the cursor to where you wish to insert the text and press the
middle mouse button.
Applications such as word processors and web browsers use Ctrl-c (that's lower
case c) for copying, and Ctrl-v for pasting. They should also support X's
traditional copy paste scheme.
Newer terminal programs (such as GNOME terminal) support both the traditional X
text selection scheme, and also use Ctrl-C (that's upper case c) to copy text,
and Ctrl-V (that's upper case v) to paste text. If you wish to paste into an
xterm (which doesn't recognize these key combinations for the purpose of
copying and pasting) press the middle mouse button as usual. Alternatively
press Shift-Ins.
Also expect some confusing behavior when working with copied text. For instance
if you copy text in the GNOME terminal using Ctrl-C, and then select a different
text segment using the mouse, then pressing Ctrl-V pastes the first text
segment, whereas pressing the middle mouse button pastes the second text
segment.
xsel is a utility that allows you to place text into the copy buffer.
The text can then be pasted using the middle mouse button.
(For more refer to this webpage.)
Examples:
* Replace the contents of the selection clipboard with the contents of a file
$ xsel < file
* Append contents of file into selection clipboard
$ xsel --append < file
* Place contents of selection clipboard into file
$ xsel > file
With a little creativity you can define some time saving aliases or functions
using this command. For example the following alias, pwdx places the
current directory in the selection buffer
$ alias pwdx='pwd | xsel'
If you now open a terminal and want to change to that directory, simply type
"cd" and press the middle mouse button.
* To refresh the X-screen (useful when appearance of screen has been messed up)
$ xrefresh
* To automate mouse and keyboard actions, including moving and resizing windows.
$ xdotool ...
(see man page)
* To interact with X window managers
$ wmctrl ...
Can move windows from one desktop to another, resize them, display info about
window manager and more.
(see man page)
* Get info on window
$ xwininfo
Note, don't expect X tools to work in Wayland.
----------
| Keyboard |
----------
If you are looking to know more of what happens behind the scenes when you
press keys on the keyboard, then the following utilities may be useful.
* To show key events
$ sudo showkey
For every key pressed or released this utility shows the event in standard
output.
* Dumpkeys writes to standard output the contents of the keyboard driver's
translation tables, in the format specified by keymaps
$ sudo dumpkeys
The translation tables specify how the keypress is interpreted with and
without modifiers.
* If you want to modify the result of pressing a key use xmodmap.
$ xmodmap
(from man page)
The xmodmap program is used to edit and display the keyboard modifier map and
keymap table that are used by client applications to convert event keycodes
into key symbols. It is usually run from the user's session startup script to
configure the keyboard according to personal tastes.
* To modify keyboard behavior on the console (not X) use loadkeys utility.
See man page, as well as example in this page.
For more on these utilities see their respective man pages. An example of
xmodmap usage follows.
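The following is a common xmodmap recipe (a sketch; keycodes and existing
keysym assignments vary between keyboards) for turning the Caps Lock key into
an additional Escape key:
$ xmodmap -e "clear lock"                  # remove Caps Lock from the lock modifier
$ xmodmap -e "keysym Caps_Lock = Escape"   # have the Caps Lock key produce Escape
To make such changes persistent, place the expressions (without the $ prompts
and xmodmap -e) in a file such as ~/.Xmodmap and run "xmodmap ~/.Xmodmap" from
.xinitrc.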
------------------------
| Keyboard Accessibility |
------------------------
Keyboard accessibility refers to modifying keyboard behavior in a way that
facilitates usage by people with various disabilities. For example, people
who have difficulty pressing a modifier key (e.g. shift, Ctrl) together with
another key to produce a capital letter or a CTRL signal (e.g. Ctrl-C) can use
a feature called sticky keys, whereby when a modifier key is pressed
it is applied to the next keystroke even though they weren't pressed together.
Sticky keys is just one accessibility feature that modifies the way the system
responds to keyboard events.
If working with a desktop environment, there is likely to be a control panel
applet for turning on and off and adjusting various accessibility settings.
X Window System offers a number of keyboard access functions (collectively called accessx):
* StickyKeys - Enables latching of modifier keys until another key is pressed.
* MouseKeys - Enables mouse movement using the keyboard.
* RepeatKeys - Adjust how fast keys are repeated. Can be helpful to people
who have a hard time releasing a key.
* SlowKeys - For people who have difficulty pressing just one key, this
feature places a delay before key events are passed on to the requesting
application, giving them opportunity to release unintended keys.
* BounceKeys - This feature is intended for people with tremors, and as a
result may strike a key two or three times in succession. With this feature
the system waits a certain amount of time before it accepts the
next keystroke.
For more refer to this webpage.
For implementing Sticky keys and other accessx features you can use the X
utility xkbset. xkbset is available in Fedora as xkbset package.
Usage examples:
* Display current state of xkbset
$ xkbset q
* Enable sticky keys and slow keys activated by 5 shifts
$ xkbset accessx
or use shorthand
$ xkbset a
* Enable sticky keys
$ xkbset sticky
or shorthand
$ xkbset st
If two keys are pressed together sticky keys will be turned off.
* To disable turning off of sticky keys when two keys are pressed together
$ xkbset -twokey
* To undo -twokey option
$ xkbset twokey
* To set some expiry settings
$ xkbset exp 1 =accessx =sticky =twokey =latchlock
exp=expires -- tells the options that follow not to expire
???Note, use "-a" before the option to disable (e.g. xkbset -a)
See also this page.
For other methods of enabling accessibility features see this Archwiki article.
-----------------
| Keyboard Layout |
-----------------
It is often necessary to configure a system for multiple keyboard layouts.
This is particularly true when working in a multi-lingual environment, but may
also be the case when composing or editing scientific documents containing
frequent Greek and other mathematical symbols. The ability to easily switch
between layouts in this case greatly simplifies usage.
If using a desktop environment, it is usually possible to configure additional
keyboard layouts through the control panel. The OS installer may also offer
the option to configure the system for multiple keyboard layouts.
If you do not use a desktop environment or you would like to add and change
keyboard layouts using a lower level procedure, then continue reading. I
illustrate this for Hebrew, but the idea is the same for other languages.
Entering Hebrew characters requires a different keyboard layout than the
standard default US layout. X-Windows has a very sophisticated keyboard
manager. The manager reads the signals sent from the keyboard, translates them
into a keycode, which is then processed into a keyboard event. The keyboard
event is then read by the relevant application(s). When X is configured for the
first time, the configurator tool determines which kind of keyboard the host
computer is connected to. The keyboard manager is then able to properly map
between keys pressed and the symbol they represent.
For instance if the key "A" is pressed then the keyboard manager will evoke a
keyboard event, whereby indicating an "a" is desired. But it does more than
that: it detects when combinations of keys have been pressed, in particular a
key pressed together with modifier keys. For example, if the key "A" is pressed
together with the shift key the keyboard manager will evoke a keyboard event
that indicates an "A" is desired.
From the actuation of a keystroke to the generation of a keyboard event, a
multilayered process is initiated, involving various mappings and rule
application algorithms. All this is highly customizable in X through
configuration files and xorg.conf (the X configuration.)
The way a keyboard is interpreted by the keyboard manager is determined by a
layout file. The default layout is "us": this layout describes the basic US
alphabet and ASCII mapping. For Hebrew, a different layout exists, designated
"il". For greek its "gr" and for French its "fr". The layout definition files
are located in the directory XROOT/xkb/symbols, where XROOT could be
/usr/share/X11/ or /usr/X11R6/lib/X11 or some other path (different distributions
put X stuff in different paths).
The layout definition file for "il" is named XROOT/xkb/symbols/il.
The layout contains the basic Hebrew alphabet, punctuation, numbers, English
capitals and some of the control keys such as insert, delete.
In order to configure X to switch between layouts, the Keyboard section in
xorg.conf has to be modified. For example:
Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
Option "XkbModel" "pc105"
Option "XkbLayout" "us,il"
Option "XKbOptions" "grp:alt_shift_toggle"
EndSection
(see subsection "X Server configuration - xorg.conf" for more about xorg.conf.)
The same can be accomplished without a static xorg.conf file with the setxkbmap
command, as such:
$ setxkbmap -rules xorg -layout "us,il" -option "grp:alt_shift_toggle"
This can be done during runtime, or by placing this command line in .xinitrc.
Note, although most modern applications are internationalized, some legacy X
applications are not, and will not recognize the non-English input symbols.
That is, they ignore keyboard events not associated with standard English
characters.
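To verify the layout configuration currently in effect, setxkbmap can be
queried (the output shown below is only indicative of the general form):
$ setxkbmap -query
rules:      evdev
model:      pc105
layout:     us,il
options:    grp:alt_shift_toggle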
Another way to configure keyboard layouts or provide multiple layouts is by
editing the file /etc/default/keyboard.
For example for interchangeable us and il layouts the file should be
XKBMODEL="pc105"
XKBLAYOUT="us,il"
XKBVARIANT=""
XKBOPTIONS="grp:alt_shift_toggle"
BACKSPACE="guess"
----------------------------------
| Bidirectional typesetting - BIDI |
----------------------------------
Bidirectional (abbreviated BIDI) typesetting is necessary for Semitic languages
such as Hebrew and Arabic which read right-to-left. This is not particularly
an X feature, but I discuss it here as it relates to previously discussed
topics.
Bidirectional typesetting is a function of the application. A window manager
which features bidirectional typesetting will correctly display window titles
and menus that have Hebrew text in logical order (i.e. not in visual order).
FVWM version 2.5 (a window manager) supports full internationalization and
therefore also BIDI. The more common desktop environments such as GNOME and
KDE, have long supported it.
Individual applications such as a terminal or word processor have to support
BIDI in order for Hebrew text to display correctly. Terminals that don't
presently support BIDI are xterm and rxvt. If using one of the terminals
supplied with GNOME or KDE then you are likely to encounter BIDI support. For
instance "konsole" supports BIDI (the option has to be activated in the
configuration dialog box). However, I have found the support to not always be
satisfactory when English and Hebrew are mixed together.
I have found Mlterm to have the best BIDI support, but it is not readily
available on many distributions.
Specific troubleshooting advice for Mlterm:
* Hebrew (UTF8) characters not displaying
Check if the font supports UTF-8 Hebrew characters (e.g. DejaVu Sans).
* Hebrew characters displayed, but cannot type them in.
This is probably a locale issue. Try
$ export LC_ALL="en_US.UTF-8"
GNOME Terminal 3.33.3 (and presumably onwards) has very good BIDI support.
See here for more.
Various office suites support BIDI as well, such as Libre Office, Apache
Openoffice, Staroffice (discontinued), and the like. Mozilla Firefox supports
BIDI as well.
Programs such as VIM or the alpine mail agent which work with the terminal on
which they are run, are subject to the terminal's capabilities. If they are
run in a terminal that supports BIDI they will display Hebrew text in the
correct direction.
VIM, however, does support Hebrew and Arabic in a way that's independent of the
terminal and requires only the standard us keyboard layout. For instance you
can launch VIM in Hebrew mode:
$ vim -H
For a more customized configuration of vim and Hebrew see VIM Essential,
subsection "Hebrew".
--------------
| Screen saver |
--------------
The default screen saver for X-windows is called "xscreensaver". Other screen
savers may be active, in which case xscreensaver will not be the operating
screen saver. For instance, if the OS is configured to launch GNOME, then
GNOME's screen saver will be what is activated. See man page for xscreensaver
on how to enable xscreensaver when the installation defaults to another screen
saver.
Use xscreensaver-command or xscreensaver-demo to configure xscreensaver,
or simply enter settings in .Xdefaults or .xscreensaver (see man page for
details).
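Assuming the xscreensaver daemon is running (e.g. launched as "xscreensaver &"
from .xinitrc), it can be controlled from the command line, for example:
$ xscreensaver-command -activate    # blank the screen now
$ xscreensaver-command -lock        # blank and lock the screen now
$ xscreensaver-command -exit        # stop the running xscreensaver daemon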
-----------
| Resources |
-----------
Traditional X client programs (e.g. xterm, xscreensaver, xpdf, xfig) can be
configured using X resources. X resources are merely parameters stored on the
X server that specify different attributes that are meaningful to the client
program for which they are specified. For example the font to use within a
window, menu or button for a given application are configurable via X
resources. Similarly, configuration attributes such as font size,
background/foreground color, window title, can all be specified as resources.
An X resource is a string having the format
application.component.subcomponent.subcomponent.attribute: value
Wildcards can be used to eliminate the need to specify all the fields.
For example the resource specification
XTerm*scrollBar: True
tells the xterm program that the default behavior is to place a scroll bar
on the side. It is possible to override this behavior by passing an argument
to xterm to suppress the scrollbar
$ xterm +sb
Resources are interpreted by the client, and therefore, it's not incumbent upon
the X server to verify whether a given resource is valid or not (in fact the
X server has no way of deciding whether a resource is valid, since new features
and associated resources may be introduced into a client which the X server
would be unaware of.)
The client program queries the X server for resources targeted for itself.
For example the xterm program queries the server for resources of class
XTerm. If a resource that is meaningless is returned to it, it ignores that
resource. Otherwise it configures the program to that value specified by the
resource element (unless, as mentioned before, the given attribute which the
resource represents is overridden by a command line argument).
Normally a user specifies X resources in the file .Xresources (located at
the root of his home directory.) The file .Xdefaults is sometimes used
instead.
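A few illustrative entries for such a file (the values shown are arbitrary
examples; "!" starts a comment in resource files):
! -- sample ~/.Xresources entries --
XTerm*scrollBar:   true
XTerm*faceName:    DejaVu Sans Mono
XTerm*faceSize:    11
XTerm*background:  black
XTerm*foreground:  white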
To load resources onto the server, use the command
$ xrdb -load $HOME/.Xresources
or
$ xrdb -load $HOME/.Xdefaults
This command can be placed in the .xinitrc file (see subsection "Starting X"
for more about this file).
See man page for more about xrdb's options.
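Two other xrdb invocations that are often useful:
$ xrdb -query                      # list the resources currently loaded on the server
$ xrdb -merge $HOME/.Xresources    # merge new entries without discarding existing ones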
The command appres lists all the resources that an application of type "class"
(e.g. XTerm, XClock) will see. The basic invocation is
$ appres class
where class refers to the kind of client application.
For example to list all the resources that an xterm client will see, issue
$ appres XTerm
To list the resources recognized by the various widget classes use the command
$ listres
In Sparc, application resource files can be found in
/usr/openwin/lib/app-defaults
------------
| Xsession |
------------
Before continuing it's advisable to read the section on Display Manager.
In Debian distributions a special shell script is launched with the start
of an X session, either through startx or xdm. This script is called
Xsession.
The following is a quote from the man page for Xsession:
/etc/X11/Xsession is a Bourne shell script which is run when an X Window System
session is begun by startx or a display manager such as xdm. Xsession is used
in Debian systems. Options can be set with: /etc/X11/Xsession.options
The following is a quote from the man page for Xsession.options:
/etc/X11/Xsession.options contains a set of flags that determine some of the
behavior of the Xsession Bourne shell script.
See the Xsession manpage for more detailed information.
For more about Xsession refer to this Debian webpage.
--------------------------------------------
| Multihead configuration (multiple screens) |
--------------------------------------------
See this Archwiki article on Multihead.
----------------------------------------------------
| Running graphical applications on a remote machine |
----------------------------------------------------
If you would like to run a graphical application on a remote machine and have
it displayed locally, you would normally log into the remote machine using an
"ssh -X" command and follow the instructions described earlier in subsection
"Client Server Model".
However, in doing this, as soon as you log out of the remote terminal the
remote application will also quit. If you need to log out of the terminal but
have the application continue running (e.g. have the Octave program run a
lengthy numeric simulation), then use the following method:
* Install package "screen" in remote host.
* Login to remote host
(local)$ ssh -X jdoe@10.1.1.1
* In remote host, issue screen command
(remote)$ screen
* Set DISPLAY variable
(remote)$ export DISPLAY=:0.0
* Run application (it should popup locally)
(remote) $ app_to_run
* Type Ctrl A D
$ [Ctrl A]
$ [D]
The latter key sequence "Ctrl A D" detaches the remote running program
and quits the local terminal.
That is, app_to_run will continue to run on jdoe@10.1.1.1 even after
quitting the terminal from which you launched the command.
Note, for non-graphical applications there is no need to apply the -X option
in ssh, nor set the DISPLAY variable.
------------------------
| SGI1600FP (Flat Panel) |
------------------------
This is one of the first popular flat panel displays for desktop computers. At
the time of its introduction by SGI in 1998 it was considered ahead of its time,
and remained competitive for many years after. It supports a resolution of
1600 by 1024 pixels. Its major drawback is the use of a non-standard digital
video interface to connect the graphics card to the monitor. To use with a PC
it requires the Number Nine Revolution IV graphics card (the company stopped
producing the card in 1999.) Alternatively, several companies produced adapter
cards to interface the monitor to DVI and VGA cards. The monitor is not plug
and play. That is, the monitor must be powered on and the video card connected
to the monitor before the computer is powered up, otherwise the screen will
remain blank. If you own this legacy monitor and would like to use it then read
further.
X uses the i128 driver (/usr/lib/xorg/modules/drivers/i128_drv.so) to drive the
Number Nine graphics card. If X's autoconfiguration doesn't identify the
graphics card correctly or uses the wrong mode lines, then you'll need to
manually configure X by creating an /etc/X11/xorg.conf file and place the
following lines into it.
Section "Device"
Identifier "Number 9 Computer Company Revolution 4"
Driver "i128"
#BusID "PCI:1:0:0" # Doesn't seem to work, nor is necessary
EndSection
Section "Monitor"
Identifier "SGI Panel"
VendorName "Silicon Graphics"
ModelName "1600SW"
HorizSync 27.0-96.0 # kHz
VertRefresh 29-31, 50.0-80.0, 119-124 # Hz
Option "DPMS"
# Taken from this webpage
#
Modeline "1600x1024d32" 103.125 1600 1600 1656 1664 1024 1024 1029 1030
HSkew 7 +Hsync +Vsync
Modeline "1600x1024d16" 103.125 1600 1600 1656 1664 1024 1024 1029 1030
HSkew 5 +Hsync +Vsync
Modeline "1600x1024d08" 103.125 1600 1600 1656 1664 1024 1024 1029 1030
HSkew 1 +Hsync +Vsync
Modeline "800x512d32" 54.375 800 800 840 848 512 512 514 515 HSkew 7
DoubleScan +Hsync +Vsync
Modeline "800x512d16" 54.375 800 800 840 848 512 512 514 515 HSkew 5
DoubleScan +Hsync +Vsync
Modeline "800x512d08" 54.375 800 800 840 848 512 512 514 515 HSkew 1
DoubleScan +Hsync +Vsync
# 1600x1024 @ 60.00 Hz (GTF) hsync: 63.60 kHz; pclk: 136.36 MHz
Modeline "1600x1024_60.00" 136.36 1600 1704 1872 2144 1024 1025 1028
1060 -HSync +Vsync
Modeline "1600x2048d32i" 103.125 1600 1600 1656 1664 2048 2048 2058 2059
HSkew 7 Interlace +Hsync +Vsync
Modeline "1600x2048d32" 103.125 1600 1600 1656 1664 2048 2048 2058 2059
HSkew 7 +Hsync +Vsync
EndSection
Section "Screen"
Identifier "Default Screen"
Device "Number 9 Computer Company Revolution 4"
Monitor "SGI Panel"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "1600x1024d32" "800x512d32"
EndSubSection
EndSection
-------
| Sparc |
-------
To startup Open-Windows on Sparc 5/10/20, issue the command
$ /usr/openwin/bin/openwin
Openwin must not see an .xinitrc file in the user's home directory,
otherwise it will open up X Windows instead of Openwindows. Renaming .xinitrc
as no.xinitrc will do the trick.
Specifying colors:
rgb:xx/yy/zz where xx,yy,zz are hexadecimal numbers specifying an rgb triplet
An X-windows graphical login program:
$ gmd
---------------------------------
| Important directories and files |
---------------------------------
Work in progress.
-----------------
| Troubleshooting |
-----------------
If emulation of the middle button of a mouse doesn't work (i.e. pressing
left and right buttons simultaneously) then you can use xinput to enable
this feature. See here for more.
Wayland
********************************************************************************
* - Wayland -
********************************************************************************
Wayland is a new windowing protocol for Linux that is intended to replace
X-Windows. For a good description of it see this Archwiki article on Wayland.
Also see this Wikipedia article.
Wayland is only a library implementing a communication protocol between a
display server and its clients. By itself it does nothing. To replace the X
Server a display server or compositor (e.g. GNOME, Weston) is required.
In order to make the transition between X-Windows and Wayland smoother, Wayland
supports an X11 compatibility layer called XWayland. XWayland is a Wayland
client that supports native X11 library calls. Applications that were designed
to run on X-windows (e.g. xeyes, xfig), make X11 library calls. These calls
are directed to XWayland which in turn renders the corresponding graphics on
a Wayland compositor.
Note, many applications today use either the GTK or the Qt cross platform
libraries to render windows and graphics on graphical displays, rather than
making direct calls to an X Server. So whether such an application will end up
using X-Windows or Wayland depends on which of the two the GTK or Qt wrappers
are using.
To test whether a particular application uses X11 or Wayland simply install
and open the xeyes utility. Place the cursor on the application's window,
move it around and see if the eyes follow the cursor around. If so, then your
application is using X11, and if not then it's using Wayland.
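On most systemd based distributions you can also check what kind of session you
are logged into by inspecting the XDG_SESSION_TYPE environment variable:
$ echo $XDG_SESSION_TYPE
This typically prints "x11" or "wayland" (or "tty" for a text console login).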
At present X-Windows is still the preferred graphics engine behind many Linux
distributions. The Gnome desktop, however, has migrated to Wayland,
so many distributions that ship with Gnome use Wayland by default rather than
X-Windows.
Whether the system is using X-Windows or Wayland will normally not be of concern
to ordinary users, or even to software developers.
Fonts
********************************************************************************
* - Fonts -
********************************************************************************
Note, this section does not deal with texmf fonts (see Latex section for more
about that.)
------------
| Fontconfig |
------------
Font management and font rendering in Linux/Unix has evolved significantly
over the years. Font software involves two systems
* A system for querying and matching fonts
* A system for rendering fonts on a screen
The most current system for managing and querying fonts is fontconfig.
It is a set of libraries designed for configuring and customizing font access.
Applications make library calls to obtain a list of available fonts. Fontconfig
also handles font configuration and matching (the actual rasterization is done
by libraries such as FreeType). When certain characters are not
available in a given font it will provide font substitution.
Most modern applications use fontconfig to handle font selection.
See this freedesktop.org webpage
for more about it. Also see the man page
$ man fonts-conf
Some useful commands
To list all fonts that fontconfig knows about
$ fc-list
Initial directories which fontconfig searches in a recursive manner are:
/usr/share/fonts/, ~/.local/share/fonts.
To list the file names of fonts fontconfig knows about:
$ fc-list : file
To list the families of fonts fontconfig knows about:
$ fc-list : family
To list the styles of fonts fontconfig knows about:
$ fc-list : style
To list the spacing of fonts fontconfig knows about:
$ fc-list : spacing
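A related utility is fc-match, which shows how fontconfig resolves a pattern to
an actual font (the patterns below are arbitrary examples):
$ fc-match sans                  # show which font the pattern "sans" resolves to
$ fc-match -a "DejaVu Sans"      # list all candidate fonts, best match first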
Fontconfig can be configured systemwide with /etc/fonts/fonts.conf,
or locally with $XDG_CONFIG_HOME/fontconfig/fonts.conf
For more details see Font configuration
----------------
| Types of fonts |
----------------
There are two classes of fonts: bitmap and outline/vector
* Bitmap fonts are stored as a pixel raster. These fonts are not inherently
scalable, and when scaled they display poorly.
* Outline fonts are described and rendered by mathematical commands.
Postscript, ttf and otf fonts are all examples of such fonts.
These fonts are scalable, and sometimes contain information on how to adjust
font sizes in a non-proportional way so as to appear more pleasing to the eye.
The following is a (partial) list of font types and associated file extensions.
* .bdf - Glyph Bitmap Distribution format.
This is a bitmap font that used to be used in X.
For more see this Wikipedia entry.
* .pcf - Portable Compiled Format.
These bitmap fonts have replaced bdf fonts in the X Window system. These
are mostly used in terminals like xterm.
For more see this Wikipedia entry.
* .psf, .psfu - PC Screen Font.
These are bitmap screen fonts used by the Linux kernel for the console
The "u" in the psfu extension signifies Unicode.
For more see this Wikipedia entry.
* .pfa, pfb - Printer Font ASCII and Printer Font Binary (a=ascii, b=binary).
These are Postscript outline fonts.
For more about Postscript fonts see this Wikipedia article.
* .afm - Adobe Font Metrics.
These font metric files accompany .pfa and .pfb files. They contain kerning
and ligature information about the fonts.
* .ttf - True Type Fonts.
This system of fonts was developed by Apple in the late 1980s.
It was originally intended as a replacement for postscript fonts.
It is mainly used in Macs and MS Windows.
* .otf - Open Type Fonts
These are TrueType fonts with Postscript typographic info.
For more see this Wikipedia article.
-------------
| X and Fonts |
-------------
X comes with its own font management and font rendering software.
Although modern applications use the fontconfig library to query and select
fonts, some legacy software still uses X's font selection scheme.
The latter is referred to as the core fonts system.
The more modern system is called the Xft font system.
Both will be discussed.
X provides two sub systems that deal with the core fonts system.
* An X font server - xfs
xfs provides X clients with fonts.
According to Wikipedia, as well as the man page, xfs is deprecated in
favor of client-side fonts, in which the client (e.g. application) renders
fonts with the aid of the Xft2 or Cairo libraries and the XRender extension.
In systems configured to use xfs, it is normally brought up as part of the
boot process, although it can be launched as a private font server.
Some font server related commands:
xfs .................. X font server executable
mkfontdir ............ Creates "fonts.dir", used by font server to find fonts
bdftopcf ............. Converts X font from BDF to PCF
fsinfo ............... Provides information on X font server
fslsfonts ............ List fonts served by X font server
fstobdf .............. Generate bdf font from X font server
* X logical font description (XLFD)
This is a font standard used by X to organize, query and select fonts.
Each font has a unique font name constructed from a hyphen delineated
fourteen field string. For example
-adobe-utopia-regular-r-normal--33-240-100-100-p-180-iso10646-1
A short descriptions of some of the fields:
(1) Type foundry
The company that distributes/sells these fonts.
In the example it's adobe.
(2) Type family (utopia)
The name given to the particular font in the foundry.
(3) Weight
The degree of "blackness" or "fullness" in the character
Common values are regular, medium, bold
(4) Slant
Common values are upright (r) and italic (i)
...
(13) Charset registry
In the example it's iso10646, which is UNICODE.
Another common value is iso8859.
(14) Character encoding
In the example it's 1
Refer to this Wikipedia article for more.
The font may have a short name (alias). For instance the font
-misc-fixed-medium-r-normal--20-200-75-75-c-100-iso8859-1
can be selected by its alias 10x20.
The utility xfontsel allows you to select or identify a font based on its
unique name. In this utility, each of the fourteen fields in the unique name
comes with a drop down menu. For each field you either select a value or
select the wild character "*" (the wild character means leaving the choice open
to any of the possible values for that field.)
For example in the foundry menu you select "misc". In the family menu you
select "fixed". And so forth. If a selection was made for every field, then
only one font will match, and select characters for that font will be
displayed. If some fields were left as "*" then more than one match may be
possible. The number of matches will be listed on the top right.
Other useful font querying utilities:
xlsfonts ................ Lists all fonts
xlsfonts -fn myfont ..... List myfont
xlsfonts -l -fn myfont .. List myfont with some properties
use -ll and -lll for additional verbosity.
xfontsel ................ Gives a sample display of any of the X-window fonts
xfd -fn font ............ Displays a table of all characters in font
To add a font to the font path use the command
$ xset fp+ /path/to/fontdir ; xset fp rehash
Note, when adding a directory to a font path, make sure all directories
leading to the font directory have global read and execute permissions,
and that all font files have global read permissions. Use chmod a+rx *
to make it such, if it is not already.
To automatically add a fontpath this command can be placed in the .xinitrc
file.
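The following is a sketch of installing a bitmap font for the core fonts system
(the directory and font file names are placeholders):
$ mkdir -p $HOME/myfonts
$ cp somefont.pcf.gz $HOME/myfonts/
$ cd $HOME/myfonts && mkfontdir     # generates the fonts.dir index
$ xset fp+ $HOME/myfonts
$ xset fp rehash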
-----------------
| Xft Font System |
-----------------
The more modern way to render fonts is using the Xft font system.
Xft relies on the fontconfig library to configure fonts.
(Note, the fontconfig library is not specific to X11.)
To get a list of fonts managed by fontconfig on your system, issue the command
$ fc-list
To add local fonts, place the fonts in the directory ~/.fonts/
To trigger fontconfig to update its list of fonts, issue the command
$ fc-cache
Note, fontconfig will eventually notice the added fonts and update the font
list of its own accord.
The fontconfig configuration file is /etc/fonts/fonts.conf.
In it you will find, among other things, the locations where fontconfig looks
for fonts.
Some applications, such as xterm, often use the core fonts system by default,
so in order to use the Xft system with these fonts you must configure the
application to do so. This may differ from one application to another.
For example, to configure xterm to use Courier, use the option -fa
$ xterm -fa "Courier"
Alternatively, to make xterm use Xft's "Courier" by default, edit the file
.Xresources and enter the line
XTerm*faceName: Courier
(For more about the .Xresources file see here)
For more about using Xft in X11 read this.
---------------
| Miscellaneous |
---------------
* .conf configuration files
Many ".conf" files can be found in /etc/fonts/conf.d
Some examples of how they can be used:
To disable bitmap fonts
$ ln -s /etc/fonts/conf.avail/70-no-bitmaps.conf /etc/fonts/conf.d/
To disable scaling of bitmap fonts
$ rm /etc/fonts/conf.d/10-scale-bitmap-fonts.conf
* If creating or editing multi-lingual documents, it's a good idea to select a
font with wide unicode coverage. For example I've encountered a problem with
xournal where the available fonts did not include Hebrew characters.
Fontconfig provided xournal with unscalable substitutes that didn't increase
in size when zooming in and out, and scaled like a bitmap when exporting to
pdf.
Note, I can use ~/.xournal/config to configure default fonts in xournal.
The DejaVu family has wide Unicode support, and is thus a good choice.
* Xorg may have fonts of its own that are not in the search path of fontconfig.
See /etc/X11/fontpath.d
* Monospaced fonts are used mainly for terminals and displaying programming code
or code snippets. Your installation probably came with monospaced fonts
(e.g. Dejavu Sans Mono) so there is usually no need to install them.
Some monospaced fonts available in the Fedora repository are
* Inconsolata: levien-inconsolata-fonts
* Source Code Pro: adobe-source-code-pro-fonts
* Fira Mono: mozilla-fira-mono-fonts
* Droid Sans Mono: google-droid-sans-mono-fonts
* DejaVu Sans Mono: dejavu-sans-mono-fonts
(For more see this Fedora Magazine article)
* Hack: see (Hack on github)
* UTF editor
A UTF editor called yudit was useful in the early days of Linux when
utf8 support was scarce. I used it at the time for displaying and editing
emails in UTF-8 and other encodings.
For more see yudit homepage.
* Font editor software
pfaedit is an outline font editor.
It can be used to create outline fonts or modify existing ones.
It has since changed its name to FontForge.
Display Manager
********************************************************************************
* - Display Manager -
********************************************************************************
The display manager is the login interface and session launcher.
Note, do not confuse a Display Manager with a Desktop Environment or a
Window Manager. These are discussed in separate sections (see TOC).
In its most basic form a display manager provides a graphical login service
and launches a graphical session. There are a number of display managers
available for Linux, and most non-server Linux distributions come with a
default display manager. Examples are:
* GDM - Display manager that comes bundled with GNOME
* KDM - Display manager that comes bundled with KDE
* LXDM - Display manager that comes bundled with LXDE
* SDDM - Simple desktop display manager
* XDM - The default display manager for X
Some more basic or customized Linux installations do not come with a display
manager at all. Linux Servers, which usually don't have X or Wayland installed
by default, only offer a text console login option.
It should be noted that most display managers allow you to select which window
manager or desktop environment to launch for the given session.
That is, the desktop environment is not tied to the corresponding display
manager. So a display manager of one kind, say GDM, may launch a
desktop environment of a different kind, say KDE or LXDE.
Suppose your distribution installer installed GNOME. You may subsequently
install the KDE desktop, and use GNOME's display manager (GDM) to launch a KDE
session. It is not necessary to switch to KDM (which is KDE's display manager)
to launch a KDE session.
For a more comprehensive list of display managers for Linux refer to this
Archwiki page and this Debian page.
The display manager looks for *.desktop files in /usr/share/xsessions for
available sessions, for example fvwm.desktop and LXDE.desktop.
These files contain instructions to the display manager on how to launch the
given session. Usually there is no need to create or edit these files, as the
package installer for the given xsession program provides the appropriate file.
-------------------------
| xdm - X display manager |
-------------------------
xdm is the default X display manager, although with more "modern" display
managers around, I haven't seen it used much. For more, see the man page
$ man xdm
-----------------------------
| GDM - GNOME display manager |
-----------------------------
Many installations enable GDM as the default display manager.
If that's not the case with your installation, and you wish to make it the
default display manager, first install it (if not already installed), and
enable GDM as follows:
$ systemctl enable gdm.service
$ systemctl start graphical.target
Note, this assumes your Linux installation is using systemd (rather than SysV.)
Furthermore, if your prior setup was such that the boot ends with a text
console rather than a graphical login service, then you can change that
behavior by pointing the default.target symbolic link to graphical.target
$ ln -svf /usr/lib/systemd/system/graphical.target /etc/systemd/system/default.target
Note, if another display manager service is running you will not be able
to enable gdm.service. The other service must first be disabled. For example
if LXDM is currently enabled, then disable it as such
$ systemctl disable lxdm.service
To configure GDM to automatically login and open a session for a given user
you'll need to edit the file /etc/gdm/custom.conf. In this file look
for the section heading "[daemon]", and add two settings:
[daemon]
AutomaticLoginEnable=True
AutomaticLogin=jdoe
(replace jdoe with your user.)
Source: this webpage
In many distributions Wayland is used by default rather than Xorg. To specify
using Xorg add the line in the [daemon] section (or uncomment if there, but
commented):
WaylandEnable=false
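Putting these pieces together, the [daemon] section of /etc/gdm/custom.conf
might look as follows (a sketch; jdoe is a placeholder user name):
[daemon]
WaylandEnable=false
AutomaticLoginEnable=True
AutomaticLogin=jdoe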
To set up a single-application session follow the instructions in this link.
This can be useful if, say, you want to use an old laptop as a jukebox, and you
wish to permit only a music player application to run.
---------------------
| Non-Graphical Login |
---------------------
The login interface a user encounters when booting a systemd based Linux OS
is determined by the symbolic link file /etc/systemd/system/default.target
Most Linux distros launch a graphical display manager by default, in which case
the symbolic link is as such:
default.target -> /usr/lib/systemd/system/graphical.target
The alternative is a non-graphical login interface, referred to as multi-user.
default.target -> /usr/lib/systemd/system/multi-user.target
In order to switch from one to the other use the systemctl command.
To switch to a textual login environment (i.e. using Linux' virtual consoles)
$ sudo systemctl set-default multi-user.target
To switch to a graphical login environment
$ sudo systemctl set-default graphical.target
These invocations create the desired symbolic link.
After rebooting the computer, you will be placed in the selected login
environment.
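To verify which target is currently set as the default, run
$ systemctl get-default
It should print either graphical.target or multi-user.target.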
If running gnome, it is also possible to quit out of an existing gnome session
as such (although when I tried it, it refused to do so)
$ sudo gnome-session-quit
To start the gnome display manager from the command line of one of the virtual
consoles (when no other display manager is running) issue
$ sudo systemctl start gdm
Sometimes the gdm service may be named slightly differently (e.g. gdm3).
If unsure what the correct name is on your system, try finding out by looking
for service names containing the substring "gdm"
$ systemctl list-unit-files | grep -i gdm
This section is based on this webpage.
Window Manager
********************************************************************************
* - Window Manager -
********************************************************************************
In X based systems, the graphical workspace consists of a few software layers.
At the base sits the X Window System. However, this by itself does not give
the functionality of the familiar desktop environment most users are accustomed
to. In fact, without additional software, the windows provided by X will have
no borders, no titlebars, cannot be moved or adjusted, nor have buttons to
close, minimize, maximize or iconize them. All these functions must be
provided by an additional layer of software that sits on top of X called a
window manager.
In addition, a window manager may provide the user with the ability to define
* desktop menus
* taskbar(s)
* multiple virtual desktops
* a virtual screen which is larger than the physical screen
* themes
* mouse focusing methods (see below)
The fact that the window manager is separate from X has the advantage that a
user can choose one that best suits his needs. Some window managers are so
customizable that two users running the same window manager may appear to be
running two different operating systems.
One of the first window managers to come bundled with X was twm. It is
a very simple and configurable window manager. It writes directly to Xlib
rather than using a widget toolkit, making it very efficient and suitable for
low resource installations. See this Wikipedia article for more.
A far more powerful window manager that spawned off of twm is fvwm.
See below for more about it.
Some other window managers are
* Compiz (can implement 3D special effects)
* Enlightenment
* IceWM
* Metacity
* Openbox (the default window manager for LXDE)
* xfwm (of Xfce desktop)
* MWM (motif window manager)
See this Archwiki article for a more comprehensive list.
--------------------
| Mouse focus method |
--------------------
The mouse focus method dictates what happens when the mouse is placed over a
window. Some of the possibilities are:
* Raise the window only when clicked on. This is the most familiar focus
method.
* Raise the window (above others) automatically without having to click the
mouse button. This means that if this window is partially obscured by other
windows, it will be raised above them without requiring the click of
the mouse button. Some window managers can configure this to take place with
or without a delay.
* Capture the mouse and keyboard focus without raising the window.
This means that keyboard input and mouse selections act on the contents of
whichever window the mouse cursor is over, even if that window is partially
obscured and not raised.
This can be beneficial in a situation such as this:
Both an editor and browser are open. The browser window overlaps the window
of the editor (say, for lack of space on the screen). You wish to type into
the editor information about the webpage that appears in the browser. In a
traditional focus scheme whenever you wish to type into the editor, its window
must first be raised by clicking the mouse on it. The problem is that in
doing so the browser window will end up partially concealed. With this
alternative focus scheme you can type into the editor by placing the cursor
over an exposed part of the editor without causing part of the browser window
to become concealed.
The focus method is configurable in many window managers. For how to do so
refer to the documentation of the specific window manager.
FVWM
********************************************************************************
* - FVWM -
********************************************************************************
FVWM is a highly configurable window manager. I describe a few select features
here. For more comprehensive documentation refer to its man page
$ man fvwm
Also see this Archwiki webpage.
FVWM employs loadable modules to expand its features. Some basic modules are:
* Pager - allows multiple virtual desktops
* Buttons - link buttons to certain apps (e.g. clock, pager)
* Task bar - a windows type task bar with Start menu
* Audio - link sounds to certain desktop actions
Configuring FVWM is done on a per user basis using the .fvwm2rc file.
This file is very important, as it controls the appearance and behavior of fvwm.
Starting FVWM without a .fvwm2rc file will launch it using a default
configuration (e.g. /usr/share/fvwm/default-config/config).
However, it is highly recommended to personalize the configuration to suit your
own needs, as this is one of the big advantages of using FVWM.
FVWM has hundreds of commands and attributes that can be used to customize the
desktop experience, and learning how to configure FVWM via the configuration
file is essential in getting the most out of it. For more complex
configuration requirements, the configuration can be split amongst a few files,
whereby .fvwm2rc contains commands to load various configuration files. For
example
Read .fvwm/common
Read .fvwm/menus
Read .fvwm/mydesktop
Each Read command loads a configuration file pertaining to certain
aspects of the configuration.
The best place to get information about configuring FVWM is the man page.
Take note, however, the man page contains nearly 9000 lines (roughly equivalent
to a two-hundred page book).
It's best to use good searching techniques when looking for specific features.
For instance if looking for the command "Scroll", then take advantage of the
capitalized "S" in the command name, and search for the command as such:
/ Scroll
Notice that I placed a space before and after the command name. This modified
search pattern helps eliminate landing the search at places where "Scroll" is
part of another word (e.g. EdgeScroll.) After the first occurrence of the search
pattern press "n" to continue searching for the same search pattern until
arriving at the command description.
I'll highlight some useful concepts, but to truly get an appreciation for FVWM
one must peruse the man page and try it out.
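To give a flavor of what a personal configuration looks like, here is a minimal
sketch of a .fvwm2rc (the values, paths and key choices are purely illustrative):
# A minimal illustrative .fvwm2rc
DesktopSize 2x2
ImagePath /usr/share/pixmaps:$HOME/lib/icons
Style * Title, Handles, BorderWidth 5
Style xclock !Title, !Handles, Sticky, StaysOnTop
# Alt-F9 launches a terminal ('M' is the Meta/Alt modifier)
Key F9 A M Exec exec xterm
Module FvwmPager 0 0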
------------------------------------------
| Concept of Virtual Desktop and Viewports |
------------------------------------------
For a complete description search the man page for "THE VIRTUAL DESKTOP"
(capitalized)
In FVWM it is possible to define:
(I) A "virtual desktop".
This refers to a logical screen which may or may not be larger than the
physical screen. FVWM refers to the physical screen as the "viewport".
--------------------------------
|Physical Screen | |
| is Viewport | |
|________________| |
| to |
| Virtual Desktop |
| |
--------------------------------
To illustrate how this can be useful, consider a laptop with a small physical
screen in which you wish to run apps which don't fit in their entirety into
its small screen. FVWM allows you to define a virtual desktop which is
larger than the physical screen.
The size of the virtual desktop is set using the DesktopSize command.
For example, to define a Virtual Desktop that is twice the width and twice
the height of the Physical screen place the following command in .fvwm2rc
DesktopSize 2x2
Moving or scrolling the viewport can be accomplished in one of two ways:
* When the mouse cursor is brought close to the edge of the physical screen
the viewport scrolls along the virtual desktop. The manner in which and
degree to which the movement takes place is configurable (see the EdgeScroll
sketch at the end of this subsection).
* Moving the viewport can be accomplished with the Scroll command.
This command can be bound to a key stroke (usually combined with a modifier
key.) For example to bind a downward movement of 100 pixels to the
Control-Down arrow key combination ("C" is fvwm's Control modifier), place the
following command in .fvwm2rc
Key Down A C Scroll 0p 100p
For more about the virtual desktop and related commands, search the man page
for section "Controlling the Virtual Desktop".
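As an example of the edge-scrolling behavior mentioned above, the amount
scrolled when the mouse hits a screen edge can be set with the EdgeScroll
command. The values are given in percent of a page (a sketch):
EdgeScroll 100 100
With these values the viewport jumps by a full page when the pointer touches an
edge; EdgeScroll 0 0 disables edge scrolling altogether.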
(II) Multiple desktops:
Desktops are independent workspaces that can be switched to from one another.
Desktops are useful for grouping applications in a logical way.
For example, a user might use one desktop for recreational purposes,
a second for home accounting, and a third for a programming project.
In FVWM there is practically no limit to the number of desktops that can
be defined. Many commands are available in FVWM to facilitate switching
between one and another.
The GotoDesk command is used to switch between desktops. For example,
suppose you want to define four desktops in a 2 by 2 grid, as such:
---------------
| Desk1 | Desk2 |
---------------
| Desk3 | Desk4 |
---------------
To bind the first four function keys to these desktops, insert the following
lines into .fvwm2rc
Key F1 A N GotoDesk 0 1
Key F2 A N GotoDesk 0 2
Key F3 A N GotoDesk 0 3
Key F4 A N GotoDesk 0 4
(A first argument of 0 tells GotoDesk to go to the absolute desk number given
by the second argument; a nonzero first argument means a relative move.)
Pressing F1 places you in Desk1, F2 in Desk2, F3 in Desk3 and F4 in Desk4.
See man page for more about the GotoDesk command.
------------------------------
| Some useful commands in FVWM |
------------------------------
* Scroll - scroll the viewport within the virtual page
e.g. To scroll right two pages and down half a page (arguments are given in
percent of a page, horizontal first then vertical):
Scroll 200 50
Initial position of Final position of VP
VP (viewport)
------------------- -------------------
|_____ | | |
| VP | | | _____ |
|_____| | | | VP | |
| | | |_____| |
------------------- -------------------
------------------------------
| Some useful style parameters |
------------------------------
* BorderWidth - This style attribute specifies how thick the window border
should be (in pixels). When given an argument of zero, window will have
no border.
* Handles - Place resizing handles on window.
* StaysOnTop - Always place window on top of windows which do not have this
style attribute. This is useful for a window you always want visible,
such as a clock.
* Sticky - Will cause a window to appear in all desktops. That is, switching
from one desktop to another does not make it disappear from view. This is
useful for something like a clock or a "new mail" indicator utility such as
xbiff
* StickyIcon - Same as Sticky, except for icons.
* Title - Place titlebar on window
* Iconbox - Define a region on the screen (box) where icons will be placed.
Any boolean style element (e.g. Title) preceded by a "!" symbol negates the
style attribute (e.g. !Title tells FVWM to not attach a titlebar to the window.)
Example:
Tell FVWM that the xclock application should be displayed without a
title bar, no resizing handles, persist across desktops and have no (zero
width) border:
Style xclock !Title, !Handles, Sticky, StaysOnTop, BorderWidth 0
-------
| Icons |
-------
Iconizing a window replaces it on screen with a bitmap or pixmap icon. This
bitmap/pixmap can be specified in the configuration file. If left unspecified
a default icon will be used. The pixmap should be in one of the predefined
directory image paths. For instance,
ImagePath /usr/share/pixmaps:/usr/share/icons/wm-icons/48x48-general:$HOME/lib/icons
This is a colon-separated list of path names.
Note, if you have multiple xterms or rxvt's and you want each to iconize to
something else, you may include the "-name" switch (e.g. xterm -name myterm) to
give that terminal its individual name, and configure (in the .fvwm2rc file) a
particular icon pixmap to be associated with a window bearing that name.
For example:
Style "xterm" Icon xterm-color_48x48.xpm
Will attach the pixmap "xterm-color_48x48.xpm" to an instance of xterm.
However, if you add the line
Style "Console" Icon myconsole.xpm
And run an xterm as such
$ xterm -name Console
When iconizing this terminal it will use the "myconsole.xpm" pixmap rather than
"xterm-color_48x48.xpm".
-----------
| Functions |
-----------
Functions are very useful in FVWM. They can make the configuration file more
readable, as well as avoid repetition of commonly issued commands (especially
useful with complex commands).
Some commands for working with functions:
* AddToFunc - adds a function definition.
(The following is the explanation given in the manual pages)
AddToFunc [name [I | M | C | H | D action]]
The letter before the action tells what kind of action triggers the command
which follows it.
* 'I' stands for "Immediate", and is executed as soon as the function is
invoked.
* 'M' stands for "Motion", i.e. if the user starts moving the mouse.
* 'C' stands for "Click", i.e., if the user presses and releases the mouse
button.
* 'H' stands for "Hold", i.e. if the user presses a mouse button and holds it
down for more than ClickTime milliseconds.
* 'D' stands for "Double-click".
The action 'I' causes an action to be performed on the button-press,
if the function is invoked with prior knowledge of which window
to act on.
* DestroyFunc
Delete a previously defined function.
* Exec - execute a shell command
e.g.
Exec exec xterm -name Console -title Console -C -fn 10x20
Launches an xterm terminal named Console.
Note, the second "exec" is the shell's execute command, and is recommended
by the FVWM man page to follow the Exec command.
An example of a function that executes the setxkbmap command to add the Greek
keyboard layout:
AddToFunc SetXkbmapUsGr "I" Exec setxkbmap -layout "us,gr" -option "grp:alt_shift_toggle"
The function was named "SetXkbmapUsGr".
This function can now be bound to the F10 function key as follows
Key F10 A N Function SetXkbmapUsGr
It can also be bound to a menu entry (see "Menus" subsection below.)
For more on functions see section "User Functions and Shell Commands" in man
pages.
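As an illustration, here is a sketch of the classic Move-or-Raise style
function (similar to the example given in the man page); a function like this
is referenced by the menu example in the Menus subsection below:
AddToFunc Move-or-Raise
+ I Raise
+ M Move
+ D Lower
With this definition, invoking the function on a window raises it immediately,
moves it if the mouse is dragged, and lowers it on a double click.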
------------
| Conditions |
------------
Conditions are a list of items separated by commas. They can be used to set
the application or set of applications to which a command should be applied to.
This is best illustrated with an example (taken from man page)
All ("XTerm|rxvt", !console) Iconify
Utilizing conditions, this command iconifies all the xterm and rxvt windows on
the current page, with the exception of the one named "console". The
parenthesis contains the list of conditions.
See "Conditions" section in man page for all sorts of conditions and how to
construct complex conditionals.
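Another small sketch using conditions: to give focus to the next xterm window
that is not iconified, one could use
Next (XTerm, !Iconic) Focus
Here Next applies the command to the next window matching the conditions in
parentheses.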
-------
| Menus |
-------
FVWM supports both simple and complex menu structures (such as nested menus.)
Various aspects of a menu are configurable (e.g. color, border width.)
A simple example of a menu whose purpose is to provide some window operations:
AddToMenu "WindowOps" "Window Ops" Title
+ "Restart" Restart
+ "Move" Function Move-or-Raise
+ "Resize" Function Resize-or-Raise
+ "Raise" Raise
+ "Lower" Lower
+ "(De)Iconify" Iconify
+ "(Un)Stick" Stick
+ "" Nop
+ "Destroy" Destroy
+ "Close" Close
+ "" Nop
+ "Refresh Screen" Refresh
+ "" Nop
+ "Goto Home Page" GotoPage 0 0
+ "Goto Page 0 1" GotoPage 0 1
+ "Goto Page 1 0" GotoPage 1 0
+ "Goto Page 1 1" GotoPage 1 1
+ "Moveto Home Page" MovetoPage 0 0
+ "Moveto Page 0 1" MovetoPage 0 1
+ "Moveto Page 1 0" MovetoPage 1 0
+ "Moveto Page 1 1" MovetoPage 1 1
+ "Go a page down" GotoPage +0p +1p
+ "Go a page right" GotoPage +1p +0p
When invoked, the menu will look like:
------------------
| Restart |
| Move |
| Resize |
| Raise |
| Lower |
| (De)Iconify |
| (Un)Stick |
|------------------
| Destroy          |
| Close |
|------------------
| Refresh Screen |
|------------------
| Goto Home Page |
| Goto Page 0 1 |
| Goto Page 1 0 |
| Goto Page 1 1 |
| Moveto Home Page |
| Moveto Page 0 1 |
| Moveto Page 1 0 |
| Moveto Page 1 1 |
| Go a Page down |
| Go a page right |
-----------------
The menu could be bound to a mouse button or a key stroke.
For example, to bind this menu to the Shift-F2 key combination, include this line:
Key F2 A S Menu WindowOps
To bind it to the middle mouse button
Mouse 2 R N Popup "WindowOps"
See section "Menu" in man page for more (search for "Types of Menus").
Desktop Environment
********************************************************************************
* - Desktop Environment -
********************************************************************************
There is yet another layer in the graphical framework of a Unix like OS, which
sits on top of the window manager, and that is the desktop environment.
To read about "what's a desktop environment" and how it extends the desktop
experience beyond that of a window manager, follow this link.
Note, many users prefer using a window manager alone. However, the majority of
users would like a full desktop experience.
---------------
| GNOME Desktop |
---------------
GNOME (GNU Network Object Model Environment) offers a full desktop environment.
Some noteworthy comments about it:
* It is a feature rich desktop environment.
* GTK+ is GNOME's toolkit (GTK+ applications will adjust their theme
in accordance with GNOME's preferences.)
* Supports drag-and-drop feature in GNOME and KDE compliant apps.
* Use "Control Center" for most configuration requirements.
Other graphical tools and command line tools may be available for more
comprehensive tweaking of your system. For example, see gconftool-2 below.
* GNOME doesn't have its own built in window manager, but rather can be
configured to work with any number of window managers, some being more GNOME
compliant (e.g. Enlightenment, IceWM) and some less so (e.g. FVWM,
WindowMaker, SCWM.)
The default window manager in GNOME3 is Mutter. It used to be Metacity in
GNOME2.
* To modify the window manager used by GNOME, use one of the following methods:
1. Use the option given at the login screen provided by GDM.
2. Use Control Center (GNOME's settings application) to configure a window
manager to be used with GNOME.
Note, I have not seen this option provided in my current installation.
3. See this blog for a command line procedure.
Note, the desired window manager must be installed first.
For example, if you wish to use Metacity
$ sudo dnf install metacity # In Fedora
$ sudo apt-get install metacity # In Debian/Ubuntu
* To start GNOME with the session manager place "exec gnome-session"
as the last entry in .xinitrc
Comment out any invocation of a window manager.
* To start GNOME without the session manager (window manager only) place "exec gnome-wm"
as the last entry in .xinitrc
Comment out any invocation of a window manager.
In Fedora:
* If not already installed
$ dnf install @gnome-desktop-environment
* To get a list of environment groups (which includes desktop environments
like GNOME, KDE, etc.)
$ dnf grouplist -v
To tweak settings from the command line, use gconftool-2:
Install gconftool-2 if not already installed (In Fedora package name is
GConf2).
Examples of modifying some focus related settings:
$ gconftool-2 --type string --set /apps/metacity/general/focus_mode mouse
$ gconftool-2 --type boolean --set /apps/metacity/general/auto_raise true
$ gconftool-2 --type int --set /apps/metacity/general/auto_raise_delay 600
To print subdirectories under /apps
$ gconftool-2 --all-dirs /apps
To print all subdirectories and entries under /apps
$ gconftool-2 -R /apps
Refer to manpage for more.
$ man gconftool-2
Another command line tool to tweak settings is gsettings.
GSettings is a high-level API for managing application settings, and gsettings
is its command line interface.
See here and here for more about the API.
For general usage of the command line tool read the man page
$ man gsettings
For an example see section on Laptop.
To tweak settings using a GUI interface, install the dconf-editor.
In Fedora the package is named dconf-editor.
In Archlinux and Ubuntu install dconf-tools.
To tweak focus related settings, navigate to
org : gnome : desktop : wm : preferences
Settings such as focus-mode, auto-raise, and auto-raise-delay can be modified
so that windows are raised automatically as the mouse is placed over them
(without the need to click the mouse).
Note, in Fedora 33 changing the focus method with gconftool-2 as described
above did nothing, but using dconf-editor worked.
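The same window-manager preferences can also be set from the command line with
gsettings (a sketch; the key names are the ones visible in dconf-editor):
$ gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'
$ gsettings set org.gnome.desktop.wm.preferences auto-raise true
$ gsettings set org.gnome.desktop.wm.preferences auto-raise-delay 600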
For applications without a title bar press ALT-space over the application's
window to bring up the title bar menu (minimize, maximize, always on top, etc.)
-----------------------
| GNOME Troubleshooting |
-----------------------
If running the Gnome desktop and it freezes, try the following:
Press CTRL-ALT F1 - should display login window.
If that doesn't work then:
Press CTRL-ALT F3 (up to F12) to put yourself in a virtual console.
Login as root. At the shell prompt type:
$ top
Look for processes called "gnome-shell". Its PID should appear on the right.
Kill the process
$ kill -9 thePID
Return to desktop by pressing CTRL-ALT F2
The middle mouse button is disabled in GNOME in Fedora 28.
In order to enable it, install gnome-tweaks
$ dnf install gnome-tweak-tool
Launch and select "Keyboard and Mouse" settings.
Disable "Mouse Emulation" (bottom).
For middle mouse paste functionality turn on "middle click paste".
-----
| KDE |
-----
* Is a feature rich desktop environment.
* It's tied to its own built-in window manager (KWin).
* To configure settings use "KDE Control Center".
kwikdisk is a KDE applet to display available file devices, including
partitions, CD drives and more. It displays information on their free
space, type and mount point, and allows mounting and unmounting of drives.
I don't have much personal experience with it, so that's about as much as I'll
say about it.
------
| LXDE |
------
See separate section on LXDE.
LXDE
********************************************************************************
* - LXDE -
********************************************************************************
LXDE is a full-featured desktop environment. It is a lightweight and efficient
desktop environment based on openbox, making it ideal for slower and/or
resource strapped computers.
LXDE is not a single software package, but rather works with components.
Not all components need to be installed and not all components were
developed by the LXDE team.
The main components are:
* PCMan File Manager - A file manager program (replaces Nautilus).
* LXDM - Default X display manager for login and session setup.
* SDDM - Simple desktop display manager. An alternative to LXDM.
* Openbox - The default window manager (can also be configured to work with
Fluxbox or Xfwm).
obconf is a graphical tool for configuring openbox.
* LXPanel - Desktop panel
* LXSession - X session manager
Other components used in LXDE:
* LXNM - GUI network connection utility
* LXLauncher - Application launcher
* LXInput - Mouse and keyboard configuration tool
* GPicView - Image viewer
* LXMusic - Audio player frontend for XMMS2
* LXTerminal - Terminal emulator
* LXTask - Task manager
* LXRandR - GUI frontend to RandR
------------------
| Configuration |
------------------
* The first thing that you might want to configure is the display manager
(see section Display Manager for more.)
- If not using a display manager, you can launch LXDE from the .xinitrc file
(read by startx) by including the line "exec startlxde".
- If you are not interested in running the desktop environment, rather
only lxpanel, then include the line "exec lxpanel" and other
commands including one that launches a window manager (e.g. fvwm) in your
.xinitrc file.
* The system level file /etc/xdg/lxsession/LXDE/desktop.conf
and user level file ~/.config/lxsession/LXDE/desktop.conf
can be used to configure your choice of default window manager
amongst other things (see the sketch at the end of this list).
* The system level file /etc/xdg/lxsession/LXDE/autostart
and user level file ~/.config/lxsession/LXDE/autostart
can be used to configure applications that launch at start time.
At the least lxpanel should be launched in this way.
* If you require multiple keyboard layouts and the ability to easily toggle
between them, then use one of these methods:
(I) Add the "Keyboard Layout Handler" applet to the panel in the desired
place. Use the applet to add and/or remove layouts.
(II) Place the following line in ~/.config/lxsession/LXDE/autostart
@setxkbmap -layout "us,fr" -option "grp:alt_shift_toggle"
Substitute for "fr" the layout you wish to add. For more layouts,
simply add the layout codes separated by commas. E.g. "us,fr,gr".
Alternatively look in file ~/.config/lxpanel/LXDE/panels/panel (if it
exists) for a line
Plugin type=xkb
Modify LayoutsList=us,il
Layout stuff appears in /usr/share/lxpanel/xkeyboardconfig
To configure keyboard for multiple layouts also refer to subsection Keyboard
in section X-Windows.
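As a sketch of the desktop.conf setting mentioned above, the window manager is
selected in the [Session] section of ~/.config/lxsession/LXDE/desktop.conf
(the exact key names may vary between lxsession versions):
[Session]
window_manager=openbox-lxde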
-----------------
| Troubleshooting |
-----------------
If having a problem with LXDE panel menus, or setting applications in
application launcher after having reconfigured user UID's, try rebooting.
Messengers
********************************************************************************
* - Messengers -
********************************************************************************
------------------
| Popup messengers |
------------------
The following utilities provide options for popping up messages on the screen
of your own computer, or other computers on the network.
boxes - A utility to place input text in a box (various formatting
options available)
e.g.
$ echo "Hello" | boxes
zenity - A utility that pops up various GTK+ dialog boxes with messages
on screen (see man page for usage and options)
notify-send - Puts up a window having a specified title and message
content.
$ notify-send "Title" "Message"
dialog - Text based dialog (see man page for usage)
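A couple of quick illustrative invocations (sketches):
$ zenity --info --text="Backup completed"
$ dialog --msgbox "Backup completed" 10 40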
----------------------
| LAN Messengers (P2P) |
----------------------
Some apps for Windows:
* Squiggle
* BeeBEEP
* TopChat
* LAN Messenger
* Mossawir LAN Messenger
* netsend
* winpopup (see www.winpopup.net/win-popup.html)
Red Hat Linux
********************************************************************************
* - Red Hat Linux -
********************************************************************************
Note, this section is very outdated. It was written before Fedora spawned off
of Red Hat, and as such may be of little relevance.
Some tools and keyboard shortcuts used with Red Hat Linux:
* linuxconf ............. Configure Linux
* sendmailconf .......... Configure sendmail
* sndconfig ............. Configure sound driver
* sound-properties ...... Configure sound properties (e.g. closing window, etc.)
* Xconfigurator ......... Reconfigure Linux (only from root)
* xconf ................. ?
* sendmail .............. Basic mail transfer program that other mail utils use
* fetchmail ............. Program to get mail -- useful if run as a daemon
* Ctrl-Alt-Backspace .... Exit out of X-windows
* Ctrl-Alt-Delete ....... Shutdown Linux
* junkbuster ............ A proxy program to filter out ads from browser
* rhlibrary ............. A program to search Linux libraries found on a CDROM
???Directories: /usr/lib/rhs/doccd, /usr/bin
Package configuration program: system-config-packages
This configuration program is very useful in that it checks for all
dependencies of the packages you wish to install.
It also works with categories, making programs easy to find.
Debian/Ubuntu
********************************************************************************
* - Debian/Ubuntu -
********************************************************************************
Debian and one of its derivatives, Ubuntu, are probably the most popular Linux
distributions.
--------------------
| Package Management |
--------------------
Package management is done with the APT suite of commands.
The main command in this suite is apt-get.
Some commonly used invocations:
apt-get install pkg ................... Install "pkg"
apt-get update ........................ Update repositories
apt-cache show pkg .................... Shows all versions of pkg and description of package
apt-cache search keyword .............. Searches for all packages with 'keyword'
add-apt-repository name ............... Adds a repository (see man page)
gdebi ................................. Works like apt-get but locally
apt ................................... High level command-line interface to the package management system
Refer to the man pages for more about these commands.
To install a standalone deb package use dpkg (similar to Red Hat's rpm command)
$ dpkg -i package-name.deb
The file /etc/apt/sources.list contains a repository list.
Add "contrib" and "non-free" after "main" if need to access packages
from the contrib and non-free sections of the repository.
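For example, a typical Debian sources.list line looks like this (a sketch;
substitute your release codename for bookworm):
deb http://deb.debian.org/debian bookworm main contrib non-free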
------
| Misc |
------
To obtain ubuntu version
$ cat /etc/lsb-release
or
$ lsb_release -a
ArchLinux
********************************************************************************
* - ArchLinux -
********************************************************************************
Some troubleshooting advice:
Sometimes after an upgrade or change of hardware (or virtual hardware)
something goes wrong and the system doesn't boot up properly.
This could be a problem with the initramfs and/or grub.cfg files
(e.g. a root partition's uuid or drive # changed)
In such a case do as follows:
Step 1. bring up system with a recent arch iso image.
Step 2. mount root partition
$ mount /dev/sdaX /mnt (substitute for X in accordance with your setup)
Step 3. change into arch root
$ arch-chroot /mnt
Step 4. mount /boot
$ mount /dev/sdaY /boot
Step 5. update system
$ pacman -Syu
Step 6. regenerate initramfs
$ mkinitcpio -p linux
Step 7. regenerate grub configuration file (grub.cfg)
$ grub-mkconfig -o /boot/grub/grub.cfg
Reboot.
Antix
********************************************************************************
* - Antix -
********************************************************************************
Antix is a distro derived from Debian. It is especially good for resource strapped
computers, although it works very well on any PC.
Note, Antix does not use systemd at all, but rather SysV type init scripts.
Use the commands service and update-rc.d to configure services.
Antix' control center can be used to configure most settings, including
networking, and the desktop manager. It is accessible via Antix' menu.
Minix
********************************************************************************
* - Minix -
********************************************************************************
Minix is a small Unix like OS with a tiny microkernel, created by Andrew
S. Tanenbaum. It was originally developed by Tanenbaum for educational
purposes, but the latest version, Minix 3, is a fully functional OS with package
management via a repository especially suitable for resource strapped systems
and embedded computers, as well as for hobbyists.
For more refer to:
* The Minix 3 Website
* Wikipedia article on Minix
--------------------
| Package Management |
--------------------
pkgin - package management application via internet
pkgin_cd - package management application via CDROM
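Some typical pkgin invocations (a sketch based on pkgin's usual subcommands):
$ pkgin update
$ pkgin search keyword
$ pkgin install pkg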
Mounting CDROM:
$ mount -r /dev/c0d2p2 /mnt
RPM
********************************************************************************
* - RPM -
********************************************************************************
rpm (Red Hat Package Manager) is a package manager program for Red Hat
based systems (RHEL, CentOS, Fedora). A RPM package is an archive of one or
more files. It is identified by the .rpm extension (e.g. foo.rpm).
The files in a RPM package may be binaries, source code, helper scripts, file
attributes and descriptive information about the package.
The most common operations are installation
$ rpm -i pkg.rpm
and removal of a package
$ rpm -e pkg
Some other invocations:
rpm -qpil pkg.rpm ....... Provides info on a package file, including all files it installs
rpm -qpl pkg.rpm ........ Lists files that will be installed from pkg.rpm
rpm -qpR pkg.rpm ........ Lists dependencies for the specified pkg.rpm
rpm -qil pkg ............ Provides info and file list for an installed package
rpm -i --test pkg.rpm ... Tests if pkg.rpm can be installed
rpm -e --allmatches pkg . Uninstalls all versions of pkg
rpm -qa ................. Query (list) all installed packages
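Another frequently useful query is finding which installed package owns a given
file, e.g.
$ rpm -qf /usr/bin/ls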
See man page for a complete description of this command with all its options.
Note, all dependencies must be met for an rpm package to install. To override
dependency checking (not recommended under normal circumstances) use the
--nodeps option.
The rpm command works with standalone packages, and not with a repository.
Therefore, it is not very easy to install packages with complex dependencies.
As such, it is recommended to use dnf or yum instead (package managers linked
to repositories).
system-config-packages is a GUI wrapper to rpm simplifying the
installation of packages from Redhat/Fedora disks (seems to be no longer
available).
dnf
********************************************************************************
* - dnf -
********************************************************************************
dnf is a package manager for rpm based Linux distributions.
It is intended to supersede yum.
Note, in Fedora yum is deprecated, and if invoked is handled by the dnf
executable. There are, however, rpm based distributions that still use yum.
For complete details refer to the man page, or invoke dnf without arguments.
Some common invocations are listed below.
* Install package "pkg"
$ dnf install pkg
pkg can have a specific version specified, or a particular architecture
(e.g. i686, x86_64) (see man page for more)
* Install rpm package
$ dnf install pkg.rpm
Note, this invocation will also attempt to install all package dependencies
from the repository (unlike the rpm command which doesn't link to a
repository.)
* Remove a package
$ dnf remove pkg
* Search the dnf repository for packages containing keyword
$ dnf search keyword
* Give a description of pkg
$ dnf info pkg
* Perform various cleanup tasks (see man page)
$ dnf clean ...
$ dnf clean packages - removes cached packages
* Lists all packages
$ dnf list all # Lists all available and installed packages
$ dnf list available # Lists all available packages
$ dnf list installed # Lists all installed packages
* Lists installed repositories
$ dnf repolist
To list files in a given package (see link):
$ dnf repoquery -l packagename
--------------
| Repositories |
--------------
A given Linux distribution comes with default repositories. For example Fedora
comes with
* fedora.repo
* fedora-updates.repo
Other repositories may be added to supplement what's available in the default
repositories.
A good example of this is as follows.
Fedora's policy is not to include software that is patent encumbered in the US
in their repositories. For instance a music player that ships with Fedora will
not play mp3 format audio because the mp3 algorithm is patented.
The rpmfusion repositories supplement Fedora's software bundle by including
software that will not ship with Fedora for legal reasons.
* rpmfusion-free.repo
* rpmfusion-nonfree.repo
Therefore the same music player, if downloaded from rpmfusion will play mp3
formatted audio.
Once the rpmfusion repository is installed the dnf command will automatically
download the music player and accompanying libraries from rpmfusion, so that it
will be able to play mp3 audio or other restricted formats.
See RPM Fusion founding principles for the difference between free and nonfree.
Note, repositories of this sort are based in countries that don't recognize
software patents.
For dnf to access a repository the corresponding configuration file must be
placed in the /etc/yum.repos.d directory (e.g. fedora.repo,
rpmfusion-free.repo). Usually a given repository will have the file available
on its website. Alternatively the website may include a rpm package that when
installed will place the file in the correct location and enable the repository.
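For example, at the time of writing the RPM Fusion free repository can be
enabled by installing its release package directly (a sketch; check
rpmfusion.org for the current instructions):
$ sudo dnf install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm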
Once a repository's configuration file is placed in /etc/yum.repos.d, the
repository can be disabled or enabled.
To disable a repository:
$ sudo dnf config-manager --set-disabled repository
To enable a repository:
$ sudo dnf config-manager --set-enabled repository
Substitute for the desired repository.
------------
| Files/dirs |
------------
* Global configuration file
/etc/dnf/dnf.conf
Refer to man page for a description of this file
$ man dnf.conf
* Repository specifications are located in directory
/etc/yum.repos.d
When adding repositories this is where their configuration file goes.
* Directory containing various dnf related cached content
/var/cache/dnf
------------------------
| Installing source code |
------------------------
To install source code follow these steps (see link):
First install dnf-utils/yum-utils
To download source code in Fedora you can use:
$ yumdownloader --source package_name
This will automatically install/enable the source repositories
and download the source package.
You should be able to find it under ~/rpmbuild/SOURCES/
use unxz (or xz --decompress) to decompress the package.tar.xz file.
---------------------------
| Upgrading Fedora with dnf |
---------------------------
Warning: Fedora's upgrading ability is constantly improving. However,
beware that if an upgrade fails this may leave the system unusable. Be sure to
backup home directories and critical data before proceeding.
To upgrade Fedora follow these steps (see link):
Step 1. Instruct to upgrade via dnf
$ sudo dnf upgrade --refresh
Reboot computer
Step 2. Install dnf-plugin-system-upgrade package if it is not currently
installed.
$ sudo dnf install dnf-plugin-system-upgrade
Step 3. Download updated packages
$ sudo dnf system-upgrade download --refresh --releasever=31
(substitute the latest stable release number, or whatever release you would like, for 31)
Step 4. Dependencies
Unsatisfied dependencies (due to, say, software installed from third-party
repositories) will require the --allowerasing option.
Step 5.
$ sudo dnf system-upgrade reboot
pacman
********************************************************************************
* - pacman -
********************************************************************************
pacman is a powerful package management utility used by Archlinux.
Archlinux is a rolling distribution, meaning open source software packages in
the repository are constantly being updated to their latest stable release. In
contrast, most Linux distributions lock the software in their repository
to a particular version, without updating it until a new version of the distro
is released. As such, with Archlinux it is important to constantly update the
system (about once a week). This is important for two reasons:
* Having the benefit of having the latest software release.
* Even more so it is important for the following reason -- infrequent upgrades
may break the system. For instance if the system hasn't been upgraded in a
few months, the changes that have transpired between the previous upgrade and
the present upgrade may be too complex to be resolved automatically by the
package manager. This is especially true if a major change took place in the
interim.
If things are not working correctly after an upgrade, check the Archlinux
website itself, as well as forums for possible fixes. The more frequently
you update the system, the easier it is to fix things. Note, if you update
frequently then most of the time the update goes smoothly and there is nothing
to fix.
Some commonly used invocations of pacman are presented.
System updates should be performed regularly, via
$ pacman -Syu
or
$ pacman -Syyu
(double "y" forces a refresh of all package databases, even if they appear to
be up to date)
Pacman keeps all downloaded packages in /var/cache/pacman/pkg.
Some "cleaning" maintenance is necessary every once in a while to keep the
cache from growing too much. Here are some command invocations to accomplish
this.
* To clean the pacman pkg cache of all uninstalled packages
$ pacman -Sc
* To completely empty the pacman pkg cache (not recommended according to
Archwiki)
$ pacman -Scc
* To remove all cached versions (multiple upgrades of a package) of each package
except most recent three
$ paccache -r
* To also check for uninstalled versions and remove them
$ paccache -ruk0
There is of course also the brute force way of removing everything from
/var/cache/pacman/pkg. Use with care!
On some Archlinux installations I have made /var/cache/pacman/pkg its own
partition, so if the cache grows to fill up available space on the partition,
the / directory or /var directory will not be affected since they are on a
different partition.
Installing and removing packages:
* To install a package
$ pacman -S pkgname
* To remove a package
$ pacman -R pkgname
* To forcibly remove a package (useful if you know you can do without it,
but can't remove it normally because some other package depends on it)
$ pacman -Rdd pkgname
Searching:
* To query all packages
$ pacman -Q
* To query a specific package
$ pacman -Q pkgname
* To search for a package containing a file
$ pkgfile filename
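pacman itself can also search the file lists of repository packages, after
syncing the files database (a sketch):
$ pacman -Fy
$ pacman -F filename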
----------------
| Pacman Keyring |
----------------
Pacman uses GnuPG keys to determine if packages are authentic (see this
Archwiki article for more about it.)
If having problems with keys try installing updated keyring prior to installing
other packages:
$ pacman -S archlinux-keyring
or perform a system upgrade.
Can also try reinitializing keys
$ pacman-key --init
$ pacman-key --populate archlinux
----------------------------
| Additional Troubleshooting |
----------------------------
If you get a message "/etc/ssl/certs/ca-certificates.crt exists"
try deleting the file or renaming it.
If you get a message "unable to lock database", try deleting the file
/var/lib/pacman/db.lck
This message could result from when a pacman session was interrupted in the
middle with a system shutdown, or similar.
If attempting to perform a system upgrade and you are told "there is nothing
to do", and you know that is not the case, the problem is likely an outdated
mirrorlist, in which case you should do as follows (see this webpage):
* Back up current mirrorlist
$ sudo cp /etc/pacman.d/mirrorlist /etc/pacman.d/mirrorlist.bak
* Use reflector utility to update mirrorlist (options specify to select 50
mirrors with fastest rate)
$ sudo reflector --verbose -l 50 -p http --sort rate --save /etc/pacman.d/mirrorlist
Note, it's probably a good idea to install reflector before you arrive at this
situation.
-------------------------
| Build packages from AUR |
-------------------------
AUR is the Arch User Repository.
It contains packages that are not currently available in the main repositories
(i.e. core, extra, community).
However, the packages are not directly installed by pacman.
They must first be built. The build process is fairly simple.
* First make sure you have base-devel packages installed
$ pacman -S --needed base-devel
* Make sure you have sudo permissions (edit sudoers file to give
permission to run pacman with any arguments)
* Go to the AUR site and search for your package.
* Enter its webpage and click get snapshot of pkg build (on upper right of page)
* Save file in a designated build directory (e.g. ~/pkgbuild)
* untar (it will create a directory with the package name where the files
will be placed)
$ tar -xvf foo.tar.gz
* Change into foo directory and run makepkg (as non-root)
$ cd ~/pkgbuild/foo
$ makepkg -sri
Note, you need to have sudo permissions for pacman. To do so enter
the following line in sudoers:
jdoe ALL=NOPASSWD: /usr/bin/pacman *
Substitute your user name for jdoe
Alternatively you can require sudo to prompt for a password (more secure)
jdoe ALL=PASSWD: /usr/bin/pacman *
* After downloading files, resolving dependencies, compiling and packaging the
build, the makepkg command will attempt to install it. Since installation
requires root privileges it will do so using sudo.
* If you were unprepared and didn't give yourself sudo permissions, as suggested,
then simply login as root and install package:
$ /usr/bin/pacman -U ~/pkgbuild/foo/foo-verno-x86_64.pkg.tar.xz
* Try running the program or programs.
Note:
If having certificate problems, you can edit "/etc/makepkg.conf"
Look for DLAGENTS=(... in beginning of config file.
This file contains the downloading agents to be used for different protocols.
For websites or ftp sites /usr/bin/curl is used.
Add the -k option to 'https::/usr/bin/curl -fLC - ...'
That is: https::/usr/bin/curl -k -fLC - ...
The -k option will instruct curl to run in insecure mode.
That is it will not check for valid certificates.
Of course, do this only if you are using an internet proxy or filter that is
causing problems with https and certificates, but you trust the site you
are dealing with.
DOS Utils
********************************************************************************
* - DOS Utils -
********************************************************************************
DOS utils (mtools) is a collection of Unix utilities to manipulate DOS files
on a DOS file system. Here you'll find some basic examples of using these tools.
For a more comprehensive treatment of these tools see man page
$ man mtools
or man page for a specific mtool command, e.g.
$ man mcd
Examples of basic usage:
* Change DOS attribute flags
$ mattrib filename
* Check floppy for bad blocks
$ mbadblocks a:
* Change mtools working directory on DOS disk
$ mcd directory
* Copies file1 from floppy drive (DOS format) to current Unix directory
where it is renamed file2
$ mcopy a:file1 file2
* Delete files
$ mdel file
* Recursively removes directory, files, and subdirectories
$ mdeltree directory
* Display directory
$ mdir file_or_dir
* List space occupied by a directory (similar to Unix' du command)
$ mdu file_or_dir
* Formats floppy disk
$ mformat a:
* Create a shell script (packing list) to restore unix filenames
$ mkmanifest [files]
* Prints parameters of a DOS filesystem
$ minfo
* Create a volume label to a disk
$ mlabel a:[new_label]
* Make a directory
$ mmd directoryname
* Mounts a DOS disk drive
$ mmount msdosdrive [mountargs]
* Creates a DOS filesystem as partitions
$ mpartition
Note, this command is not intended for Linux systems where fdisk is available
to do the job.
* Remove directory
$ mrd directory
* Rename or move an existing DOS file or directory
$ mren sourcefile targetfile
* Display fat entries for a file
$ mshowfat
* Test mtools configuration files
$ mtoolstest
* Display contents of a DOS file
$ mtype
* Issue ZIP disk specific commands on Solaris or HPUX
$ mzip
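The mtools commands can also operate on disk image files rather than physical
drives. A drive letter can be mapped to an image in ~/.mtoolsrc (a sketch; the
path is illustrative):
drive x: file="/home/jdoe/images/floppy.img"
After that, commands such as "mdir x:" and "mcopy x:file1 ." work against the
image.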
MS-Windows
********************************************************************************
* - MS-Windows -
********************************************************************************
This section covers a few specific items that were relevant to me at one point
or another with relation to MS-Windows.
------------------------------------------
| Directory structure of MS-Windows system |
------------------------------------------
(This section is incomplete)
c:\windows\system32
...
------------------------------------
| Linux utilities for Microsoft Word |
------------------------------------
wvAbw (1) - Convert msword documents to Abiword's format
wvCleanLatex (1) - Convert msword documents to LaTeX
wvDVI (1) - Convert msword documents to DVI
wvHtml (1) - Convert msword documents to HTML4.0
wvLatex (1) - Convert msword documents to LaTeX
wvMime (1) - View MSWord documents
wvPDF (1) - convert msword documents to PDF
wvPS (1) - convert msword documents to PS
wvRTF (1) - convert msword documents to RTF
wvSummary (1) - View word document's summary info
wvText (1) - Convert msword documents to text
wvVersion (1) - View word document's version #
wvWare (1) - Convert msword documents
wvWml (1) - Convert msword documents to WML
-------------------
| Terminal commands |
-------------------
$ xcopy sourcedirectory targetdirectory
--------------------
| Network and Shares |
--------------------
* To map a network drive:
In command prompt invoke
$ net use x: \\vboxsvr\myfolder
This commands maps a virtualbox shared folder called "myfolder" onto drive X:
Map a samba share located at IP address 10.0.0.5 onto drive Y:
$ net use y: \\10.0.0.5\share
Can also use GUI to map drives.
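To list existing mappings, or remove one, use
$ net use
$ net use x: /delete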
------
| Misc |
------
* To get activation code in Win10 using command prompt:
$ wmic path softwarelicensingservice get OA3xOriginalProductKey
Can also download activation finder program from internet.
---------------------------------------------
| Managing and repairing Windows installation |
---------------------------------------------
Use Sergei Strelec's WinPE boot and repair CDROM image to administer and fix
many issues that may arise with your Windows installation.
It includes a myriad of applications to handle such things as resetting
passwords, managing accounts, partition and data recovery, and much more.
It can be downloaded from Strelec WinPE.
For Sergei's website click here.
--------------------
| Forgotten Password |
--------------------
The following method was tested for Windows 7.
Open Windows main partition from a Linux installation.
Make a copy of c:\windows\system32\Utilman.exe
(Utilman is a utility program that can be run to set various
settings in MS-Windows without being logged in).
Make a copy of c:\windows\system32\cmd.exe and rename to Utilman.exe
Note: this poses a security risk since Utilman can be run by anyone from the
login screen, which runs it in privileged mode.
However, this is just what is needed for resetting a password.
Boot Windows and open Utilman which is really cmd.exe
$ cd c:\windows\system32
At the prompt type:
$ compmgmt.msc
Select Local User and Groups to reset password.
If compmgmt.msc doesn't have the plugin for modifying user accounts then
use the net user command. For instance to modify a user account type:
$ net user "user name" newpassword
e.g.
$ net user jdoe jdoepasswd
If the user profile for your account is not loading, thus preventing you from
entering your account, then it needs to be fixed.
Do as above making Utilman be cmd.exe
Run cmd.exe (as you would Utilman) and launch regedit from the command line:
$ c:\windows\system32\reged*.exe
Note, you are now running regedit with administrative privileges so you will
be able to make changes to the registry.
Follow the instructions on this Microsoft's support webpage.
Also see subsection above "Managing and repairing Windows installation".
Tmux
********************************************************************************
* - Tmux -
********************************************************************************
Tmux allows multiple terminals to be controlled from a single terminal
screen. When invoked, it takes over the terminal from which it was invoked.
You can then split it into multiple panes, and perform various manipulations.
To launch, type (within whatever terminal you wish to work in)
$ tmux
To issue commands to tmux enter Ctrl-b followed by desired command shortcut
(see manual page for a full list of commands)
Some useful commands:
$ - Rename the current session
, - Rename the current window (to rename a pane see further down)
% - Split the current pane into two, left and right
" - Split the current pane into two, top and bottom
c - Create a new window
o - Select the next pane in the current window
m - Mark the current pane
n - Change to the next window
q - Briefly display pane indexes
0-9 - Select windows 0 to 9
! - Break the current pane out of the window
& - Kill the current window
: - enter a command to run
x - Kill the current pane
{ - Swap the current pane with the previous pane
} - Swap the current pane with the next pane
M - Clear the marked pane
f - Prompt to search for text in open windows
[ - open copy mode (from which you can copy text into a paste buffer)
] - paste most recent paste buffer
# - List all paste buffer
Up/Down/Left/Right arrow keys - move from one pane to another
Ctrl-Up, Ctrl-Down, Ctrl-Left, Ctrl-Right - Resize the current pane in steps of one cell.
Meta-Up, Meta-Down, Meta-Left, Meta-Right - Resize the current pane in steps of five cells
Tmux uses a server client model.
A single tmux server manages all tmux clients and communicates with them
through a socket in the /tmp directory.
The client can be an xterm or a gnome terminal, or similar.
The server is launched when the first tmux session is launched and terminates
when the last session is closed.
A session has one or more windows linked to it, and each window may have one
or more panes in it.
For example, say you open two xterm terminals, and you launch a tmux
session in each one. You now have two open tmux sessions.
Within one of the sessions you create three windows. Only one window will be
active (visible). You can switch between which window is active using tmux
commands or the relevant keyboard shortcuts.
You can split a window into multiple panes. The window may be split
horizontally or vertically or both.
Commands may be issued to tmux sessions, windows or panes using the same
tmux command you used to launch tmux.
For example, to turn off the status bar issue
$ tmux set-option status off
To turn on the status bar
$ tmux set-option status on
To display a 3 line status bar issue
$ tmux set-option status 3
To rename the current session to "mysession"
$ tmux rename-session mysession
"current" refers to the session from which you launch the command.
You can use the "-t" option to specify the session you wish to modify.
To create a new window and name it "mywindow"
$ tmux new-window -n mywindow
To rename the current window to "mywin"
$ tmux rename-window mywin
To split the window whose name is mywindow and set its height to 10 lines
$ tmux split-window -t mywindow -l 10
On the status bar at the bottom, the session name appears on the left in
brackets.
The windows within the session appear next to it, numbered 0 and onward.
An asterisk appears next to the active (visible) window.
To rename the current pane, type within the commandline (substitute your
own title for "My Title")
$ printf '\033]2;My Title\033\\'
You can detach a tmux pane from one session and attach it to another. See
the man page for more about this useful feature.
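For instance, to break the current pane out into its own window, and to move a
pane into another session (a sketch only; "mysession" and "othersession" are
hypothetical session names):
$ tmux break-pane
$ tmux join-pane -s mysession:1.0 -t othersession:0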
tmux provides a feature whereby you can copy text from anywhere in the
terminal into a paste buffer.
To enter copy mode, press Ctrl-B [
You may now scroll through the content, up and down left and right.
When reaching the starting point of the text you wish to copy into a paste
buffer press the space bar.
Maneuver the cursor to encompass the desired selection and press return.
Pressing return takes you out of copy mode.
You may now paste the text by pressing Ctrl-B ]
To list all paste buffers, press Ctrl-B #
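Paste buffers can also be manipulated from the command line, for example (the
file name is just an illustration):
$ tmux list-buffers
$ tmux save-buffer ~/tmux-buffer.txt   # write the most recent paste buffer to a file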
The default system-wide tmux configuration file is /etc/tmux.conf.
The default user configuration file is ~/.tmux.conf.
The conf file allows you to configure tmux's behavior to suit your needs and
preferences.
For example, the following lines will set the color of the status bar to
red, and cause tmux to run bash whenever a new pane is created.
set-option -g status-style bg=red
set-option -g default-command bash
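As a further illustration of what the conf file may contain, the following
lines (a common customization, not a requirement) rebind the pane-splitting
shortcuts to keys some find easier to remember:
bind-key | split-window -h
bind-key - split-window -v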
Tmux is actually a far more sophisticated program than would appear from my
cursory description of it. I highly recommend looking at the man page.
$ man tmux
********************************************************************************
* - Sound & Multimedia -
********************************************************************************
A computer stores and reproduces sound digitally. This means that all sound
stored and processed in a computer is represented as digital samples. These
digital samples can be obtained through a sampling process from an analog
audio source, such as a microphone, or synthesized internally.
There are a number of ways to sample an analog signal, with pulse-code
modulation (PCM) being the most common method used in sound cards. The basic
idea is to capture the signal amplitude at a given time instant and quantize
it, that is, round it to the nearest quantization level defined by the
quantization scheme. This process is repeated at regular time intervals. The
time interval between samples is called the sampling period, and its reciprocal
is the sampling rate. The sampling rate has the same units as frequency
(i.e. Hertz, kHz).
To illustrate quantization, consider a measurement of 1.573923235 volts taken at
a certain instant at the output of a microphone. This measurement is a single
sample. To store this sample in computer memory it must first be represented
digitally. A four byte floating point number will surely provide sufficient
accuracy. However, often a lesser degree of accuracy is sufficient for
reproducing the recorded sound with "good" quality. In a linear quantization
scheme, a voltage interval (say -5 volt to +5 volt) is partitioned into a
certain number of equally spaced voltage levels. 8-bit quantization accommodates
256 levels (2^8 = 256). With this scheme the above number will be stored as
1.5625, resulting in a rounding error of 0.011423. A 16-bit scheme accommodates
2^16 = 65,536 levels, resulting in less rounding error (on average). Any
voltage outside the ±5 volt interval will be truncated, and any voltage
inside the interval will be rounded off to the nearest valid level in the
quantizing scheme. Some quantization schemes (e.g. mu-law) partition the
permitted voltage range in a non-linear fashion.
Two important parameters in representing a sound digitally are the sampling
rate and bits per sample. For example a CD quality soundtrack is sampled
at 44.1 KHz (44,100 samples per second) at 16 bits (two bytes) per sample.
This is what gives CD audio high fidelity. With stereo two such tracks are
present.
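To put these numbers in perspective, the raw data rate of CD audio works out to
44,100 samples/sec x 2 bytes/sample x 2 channels = 176,400 bytes per second,
or roughly 10 MB per minute of uncompressed stereo sound.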
---------------
| Audio formats |
---------------
Many audio formats have been developed for storing sound digitally. The
simplest is to store the sound as raw samples, in which case, to reproduce the
sound it is necessary to know the sampling rate, bits per sample and
quantization scheme.
Most formats use "containers" to store information about the coding as well as
meta data (e.g. Author, Artist, Track title, Cover art, etc.). Such formats
are really container formats.
Many formats apply compression. Of those that use compression, some apply
lossless compression (i.e. there is no loss of information when
restored), and some apply lossy compression where some acceptable
level of audio degradation is tolerated in exchange for additional compression.
Whatever the format, to make use of it a codec is required.
A codec (short for coder-decoder) is the software that implements a particular
coding scheme for encoding and decoding video and/or audio data. Many
audio formats are really container formats, some of which are designed to work
with a specific codec, whereas others can accommodate multiple codecs.
Refer to this Wikipedia link for a list of codecs.
The following is a partial list of commonly used audio formats.
* Uncompressed audio
* WAV
An IBM/Microsoft format that is still in wide use, especially with MS
Windows. File extension is .wav
* AIFF
A format developed by Apple that is commonly used on the Apple Macintosh.
A variant is used in Mac OS systems.
File extensions are .aiff or .aif
A compressed variant may have the file extension .aifc, but may also have the
same extension as uncompressed AIFF.
* AU
Originally introduced by Sun Microsystems, the standard has evolved
to support different encoding formats. This format is associated with Unix.
File extension is .au
* Raw PCM
A headerless format with PCM encoded data. Since there is no header, the
sampling rate, sample size and channel count must be known to reproduce the
sound (a playback example appears after this list).
* Lossless compression
* FLAC
Being a lossless format it is often used when ripping CDs, saving around
50% or more in storage. File extension is .flac
* WMA Lossless
A windows multimedia format. File extension is .wma
A lossy variant with the same file extension exists.
* Lossy compression
* MP3
This format was developed as part of the MPEG standard. It is probably the
most common audio format for distributing and storing audio files.
File extension is .mp3
* Vorbis
A popular patent-unencumbered lossy audio format. It is used in Wikipedia to
present audio clips, and is recommended by Linux distributions that purposely
omit from their repositories software that processes patent encumbered
formats (e.g. Fedora). File extension is .ogg
On the technical side, this is a file format that can multiplex a number of
separate independent open source codecs for audio, video and text.
Ogg commonly refers to the ogg-vorbis audio file format.
Other common codecs that ogg accepts are theora (a video codec)
and speex (a human voice codec).
The text streaming capability is useful for subtitles.
* AAC
A format designed to succeed mp3.
File extensions .m4a, .mp4, .3gp
See this Wikipedia article for more about audio file formats.
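As an illustration of the point made above about raw PCM, playing such a file
requires supplying the coding parameters yourself. A sketch using ALSA's aplay
utility (see the ALSA section), assuming a 44.1 kHz, 16-bit, stereo,
little-endian clip named clip.raw:
$ aplay -t raw -f S16_LE -r 44100 -c 2 clip.raw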
-----------------------
| Sound system in Linux |
-----------------------
In Linux the sound system is normally managed in three layers:
* Driver and interface layer
This layer drives the sound hardware, and is usually resident in the kernel.
In Linux two systems are available:
* ALSA (see section on ALSA). This is the default.
* OSS (Open Sound System) is an alternative system, which was present in
earlier versions of Linux. It is also available for other Unix like OSs.
* Sound server
This layer sits on top of ALSA or OSS, providing the capability to manage
sound streams coming from the desktop environment and different applications.
It offers applications and the user a unified sound interface.
From a user's perspective, there is no need to learn and interact with the
specific interface of his sound card.
The desktop and applications send their sound streams to the sound server
rather than directly to a sound card driver. Volume control and various
mixing functions are managed by the sound server rather than by sound card specific
interfaces.
In Linux a choice of sound servers are available:
* PulseAudio (discussed in more detail in section Pulse Audio)
This is the most commonly used sound server, and is the default server for
many desktop environments. Besides the standard functions expected of a
sound server it supports network streaming (i.e. playing sound from one
computer on another's sound hardware).
* JACK
This is an older sound server system targeted more for professional audio
applications. It features low-latency recording, playback, effects and
more. JACK2 targets multi-processor systems.
* NAS
* Application Layer
In general, applications that produce or record audio interface with the
sound server rather than with ALSA. If no sound server is available many
applications know how to access ALSA directly, although that is not always
the case (e.g. Skype). Sound editing software such as Audacity provide the
option of accessing ALSA directly.
Take note that interfacing with ALSA directly to access a given sound card
prevents the sound server from gaining access to it. Furthermore, there
are applications which don't support a sound server and will rely strictly on
ALSA. Such applications will grab control of the sound card from the sound
server. Refer to this Archwiki webpage for a way around this when PulseAudio
is the active sound server.
-------------------------
| Why use a sound server? |
-------------------------
In principle it is possible to manage without a sound server, and have
applications access the sound hardware directly through ALSA. However, here
are a few ways a sound server simplifies things:
* Mixing
The sound server automatically mixes audio streams from different applications
and desktop components. If no sound server is running, an application
streaming sound to a sound card device locks access to that device, meaning
other applications can't use it until it's released. For instance, if
playing music using a music player, audio notifications from the desktop will
be left unheard.
Note: technically a sound server is not necessary for mixing sounds. ALSA
itself allows for the creation of a mixing device so that applications
can direct their sound output to ALSA's mixing device rather than directly
to a sound card, but this is not as seamless and transparent to the user
as when relying on a sound server to do it. See section ALSA for more.
* Sampling Conversion
A sound card channel may often accept only one sampling rate (and
quantization scheme), in which case the audio stream being fed into it may
require a sampling conversion. A sound server does this in a manner
that is transparent to the user.
* Volume
A sound server offers an easy way to adjust volume in a system wide fashion.
* Complexity
Sound cards usually offer multiple channels and devices. For instance my
built-in sound card offers the following playback settings:
Master, Headphone, PCM, Front, Front Mic, Front Mic Boost, Surround Center,
LFE, Line, Line Boost, S/PDIF, S/PDIF Default, S/PDIF 16, Auto-Mute Mode,
Loopback Mixin, Rear Mic, Rear Mic Boost.
And the following capture (recording) settings:
Front Mic Boost, Line Boost, Capture, Capture 1, Digital, Input Source,
Input Source 1, Rear Mic Boost.
Such an array of devices might confuse an ordinary user trying to simply
raise the volume of his computer's sound output. (Although, in this case
the "Master" playback setting is probably the way to accomplish this.)
Nonetheless, the abstraction offered by a sound server relieves the user
of having to familiarize himself with the specifics of his sound card.
Refer to this Archwiki article for an overview of Linux sound.
Also refer to this section of Fedora documentation.
------
| Alsa |
------
See section on ALSA.
------------
| pulseaudio |
------------
See section on Pulse Audio.
-----
| SoX |
-----
SoX (Sound Exchange) is a utility for converting between different sound
formats. It supports basically any known music format and has options for
applying numerous effects. According to its man page it's "the Swiss Army knife
of audio manipulation". For more see man page.
$ man sox
SoX is usually bundled with these utilities:
play - Play an audio file. Can apply transformations and effects as with SoX.
rec - Record an audio file from an input device (e.g. microphone).
soxi - A utility to display meta data of an audio file.
sox, play and rec share the same man page.
Examples of sound file editing/conversion and playing with SoX:
* Convert to CD format:
$ sox filename.wav filename.cdr
* Convert to CD format with cropping:
$ sox filename.wav filename.cdr trim start length
start and length can be specified as number of samples or as a time
(hh:mm:ss.pp where pp is fraction of a second)
* Concatenate sound files:
$ sox infile1 infile2 outfile
* Change pitch:
$ sox filein.wav fileout.wav pitch 200
* Change sampling rate
$ sox filein.wav fileout.wav rate 20k
* Add reverb effect
$ sox filein.wav fileout.wav reverb 50 # reverb range: 0-100
* Change volume
$ sox filein.wav fileout.wav gain -5 # reduces volume by 5 dB
* Copy a 10 second portion of infile starting at 6 seconds
$ sox infile outfile trim 6 10
* Play from 6 seconds up to 20 seconds, and then resume 5 seconds before
the end of the audio
$ play infile trim 6 =20 -5
----------------------------------
| Miscellaneous Audio applications |
----------------------------------
* xmms Audio Player
A very good audio player for X Windows with a graphical front end.
Has multiple skins and many features.
* Audacious
An Audio player with a modern (GTK+) graphical front end.
* Alsa utilities:
amixer, alsamixer - ALSA's command line and semi-graphical interface mixers.
aplay, arecord - ALSA's command line play and record utilities.
See section ALSA for more.
* ffmpeg
This is a utility to convert between any two video formats, but can also be
used to convert mpeg sound formats such as mp3.
See also here.
* hxplay
This is a video and audio player.
See this website for more.
* gnome-volume-control
This is a gui-control for playback/recording volume.
* xoscope
An application to turn your soundcard into a poor man's oscilloscope.
--------------
| Tag handling |
--------------
Normally an mp3 or ogg file has tags that identify certain information
about the sound file, such as Album title, Artist and more.
Here are a few utilities that can create and modify such tags.
* vorbiscomment
A utility to edit comments in an ogg/vorbis file. Example usage:
$ vorbiscomment -w -c labelfil musicfil.ogg
* mp3info and mp3info2
These are useful command line programs for displaying and setting tags on mp3
files.
* For ID3v1 tags
$ mp3info ...
* For ID3v2 tags
$ mp3info2 ...
* Other command line tag editing utils:
* id3tag
* id3v2
* easytag
A graphical tag editing utility.
For a complete list see link
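As an illustration of the id3v2 utility mentioned above (the tag values and
file name are placeholders):
$ id3v2 -a "Some Artist" -A "Some Album" -t "Some Title" track01.mp3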
----------
| CD Audio |
----------
Here is a list of various utilities to play and extract CD audio.
* gnome-cd
A Gnome CD player - just plays CD's in your CDROM/DVD drive.
* sound-juicer (Sound Juicer CD Ripper)
It's a CD player and song extractor.
It extracts songs out of CD's and saves in various formats (ogg, flac, wav)
* cdparanoia (recommended)
A command line audio CD extracting utility which includes extra data
verification features.
Usage:
First create directory where you want files to be saved, and then run
cdparanoia from that directory
$ cdparanoia -B
In this form cdparanoia will extract all tracks from a disk and save in
"wav" format (an uncompressed format).
For other options including individual or selected track extraction refer to
its manual pages.
$ man cdparanoia
Some error codes related to the ripping process:
- A hyphen indicates that two blocks overlapped properly, but they were
skewed (frame jitter). This case is completely corrected by Paranoia
and is not cause for concern.
+ A plus indicates not only frame jitter, but an unreported, uncorrected
loss of streaming in the middle of an atomic read operation. That is,
the drive lost its place while reading data, and restarted in some
random incorrect location without alerting the kernel.
This case is also corrected by Paranoia.
e An 'e' indicates that a transport level SCSI or ATAPI error was caught
and corrected. Paranoia will completely repair such an error without
audible defects.
X An "X" indicates a scratch was caught and corrected. Cdparanoia will
interpolate over any missing/corrupt samples.
* An asterisk indicates a scratch and jitter both occurred in this general
area of the read. Cdparanoia will interpolate over any missing/corrupt
samples.
! A ! indicates that a read error got through the stage 1 of error
correction and was caught by stage 2. Many '!' are a cause for concern;
it means that the drive is making continuous silent errors that look
identical on each re-read, a condition that can't always be detected.
Although the presence of a '!' means the error was corrected, it also
means that similar errors are probably passing by unnoticed. Upcoming
releases of cdparanoia will address this issue.
V A V indicates a skip that could not be repaired or a sector totally
obliterated on the medium (hard read error). A 'V' marker generally
results in some audible defect in the sample.
To query CD (must have CD in drive):
$ cdparanoia -Q
* icedax - Another CD extraction command-line app
This is a sampling utility that dumps CD audio data into wav sound files
Examples:
* To access audio CD on home computer:
$ icedax -device 0,1,0 ...
* To read tracks 1-10 onto file fil.wav:
$ icedax -device 0,1,0 -t 1+10 fil.wav
* to merely get info:
$ icedax -device 0,1,0 -t 1+5 -J
See man page for more.
* ripit
A Perl script for ripping CDs. It calls one of several CD rippers to do the
actual ripping (dagrab, cdparanoia, cdda2wav, tosha, cdd).
* cdrecord (recommended)
A powerful utility for burning CDs. See also wodim.
It can burn data as well as audio onto CD's.
To record audio files onto a CD:
$ cdrecord -v speed=1 dev=/dev/cdrw:1,0 -audio filenam1.cdr filename2.cdr ...
cdrw is the device node name in the /dev directory for a read/writable
CD. Note, this is how it's named in my installation. It may differ for
other installations.
dev=/dev/cdrw:1,0 is the device info for the CD/DVD writer. It is actually
an abbreviation of dev=/dev/cdrw:0,1,0, however, when the first digit is a
zero it can be omitted.
The ":0,1,0" is a three field identifier specifying:
SCSI bus#, target# and lun#.
To obtain these three parameters for your CD/DVD writer, issue the command
$ cdrecord -scanbus
On my current system
SCSI bus# = 0 (and was therefore omitted, since it's the default)
target# = 1
lun# = 0
All files to be written to the CD have to be specified in the command line.
Once the command has completed writing the files, no subsequent tracks are
writable onto the CD since fixing will take place after the final
track.
If you don't wish to fix the CD at this point use the "-nofix" option
$ cdrecord -v speed=1 dev=/dev/cdrw:1,0 -nofix -audio filename.cdr
This invocation will write filename.cdr onto the CD but not fix the CD, thus
allowing subsequent writes of additional tracks on this CD.
With the last track you will need to fix it
$ cdrecord -v speed=1 dev=/dev/cdrw:1,0 -fix -audio lasttrack.cdr
otherwise, it will not be playable.
* readcd
This is a command line tool for reading audio CD's.
* cdda2ogg
Extract CD audio tracks and encode them in ogg/Vorbis format.
* cdda2wav
Extract CD audio tracks and encode them in wav format.
ALSA
********************************************************************************
* - ALSA -
********************************************************************************
Producing sound from the computer requires interfacing with the sound hardware,
which in turn requires kernel drivers for the sound card(s). ALSA is the
prevalent software bundle in Linux systems that provides drivers, kernel
modules, libraries and utilities to support direct sound production.
ALSA stands for Advanced Linux Sound Architecture and provides the lowest level
of sound support in Linux. ALSA features:
* An API (application programming interface) for sound card device drivers.
* Automatic configuration for a large assortment of sound card hardware.
* Can handle multiple sound cards and devices in a given system.
* Accompanying command-line utilities.
An alternative low level sound platform available for Linux is OSS. It supports
some legacy sound drivers. If ALSA doesn't support your sound hardware then
OSS might.
If no sound server (e.g. pulseaudio) is active then the application must
interact directly with ALSA. Not all applications will do so, whereas some
applications will only interface with ALSA.
The following utilities come with ALSA:
* amixer
This is a command line utility to view and configure mixer settings within
ALSA. Without arguments it displays the current mixer settings for the
default sound device. This program is handy for use in scripts.
See man page for more about usage and options.
For a mixer program with a more user friendly interface see alsamixer.
Example of setting master volume of default device to 50%.
$ amixer set Master 50%
* alsamixer
An ncurses mixer program for ALSA. Ncurses produces a GUI-like appearance
on a text terminal. This program is similar to the amixer utility except
that the mixer settings are adjusted graphically.
Some useful key bindings:
* Turn volume up
W or PgUp - both channels; Q - left channel; E - right channel
* Turn volume down
X or PgDw - both channels; Z - left channel; C - right channel
* Quit program/help - ESC
* Show all playback devices of soundcard - F3
Show all capture devices of soundcard - F4
Show all playback and capture devices of soundcard - F5
* Select a different sound card - F6
For a complete list of key bindings refer to alsamixer's on screen help
by typing "h" or "F1". Also refer to its man page.
Note, if pulseaudio is active then it will be the default device and alsamixer
will display a single volume bar. The bar is actually composed of two half
width bars that can be adjusted independently.
* aplay
Plays sound files of various formats. Search man page for "-f" option for
a list of supported formats. Does not support lossy formats such as mp3
and ogg/Vorbis. See SoX's play command line utility for a more
versatile audio file player that supports audio editing, filtering and a
plethora of formats.
* arecord
Records sound files of various formats. It shares the same man page as aplay.
See SoX's rec utility for a more versatile audio file recorder.
Examples:
* Record sound until Ctrl-C or Ctrl-D is pressed:
$ arecord -f dat filname.wav
* Record a five second CD quality stereo clip
$ arecord -f dat -c2 -d 5 test.wav
* alsactl
This utility is used to fix problems that don't seem to fix themselves with
amixer or alsamixer.
For example to attempt to restore sound cards to their original configuration,
invoke
$ alsactl init
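Two other commonly used invocations, which save and restore the mixer state
kept in /var/lib/alsa/asound.state:
$ alsactl store
$ alsactl restore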
Refer to this Wikipedia article for an overview of ALSA.
-------------------------------
| Sound configuration with ALSA |
-------------------------------
A Linux installation running a desktop environment such as GNOME or KDE should
ordinarily not require ALSA to be configured manually. However, there are
situations in which configuring or tweaking ALSA settings may be required,
such as when running a window manager by itself.
The following is based on this Archlinux wiki webpage.
In Archlinux ALSA software comes bundled in two packages:
* ALSA driver modules usually come built in to the kernel.
* The alsa-utils package provides utilities such as alsamixer and aplay.
Note, in many distributions they are installed as part of the core system.
ALSA provides two systemd services:
alsa-restore.service and alsa-store.service
To verify proper working order, test if they are enabled
$ systemctl is-enabled alsa-store.service alsa-restore.service
The output should be:
static
static
Use the aplay utility to test sounds. For example
$ aplay /usr/share/sounds/alsa/Front_Left.wav
$ aplay /usr/share/sounds/alsa/Front_Right.wav
If no sound comes out, use alsamixer utility to verify that the ALSA channels
for the default sound card are not muted. If they are, then unmute them.
In alsamixer press "m" to toggle between mute and unmute.
Here are a few examples illustrating usage of alsa utilities:
* To show the settings of the Master control
$ amixer get Master
* To show a complete list of simple mixer controls
$ amixer scontrols
* As above, but also show settings of mixer controls
$ amixer scontents
* To modify the settings of a control (takes a control identifier and a value)
$ amixer cset
* In my ASUS H81I-PLUS the built in audio controllers are (using lspci):
00:03.0 Audio device: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor HD Audio Controller (rev 06)
00:1b.0 Audio device: Intel Corporation 8 Series/C220 Series Chipset High Definition Audio Controller (rev 05)
(note Xeon E3-1200 refers to the CPU)
* On my other i5 computer
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
* To test speakers use
$ speaker-test -c 2
* To list available audio devices:
$ aplay -L
or
$ aplay -L | grep :CARD
* To list audio cards
$ lspci
Look for cards that are described as audio
Note, if probing for audio cards in a running virtual machine, it's not the
real card that will show up, but rather the virtual machine's emulated card.
In addition to ALSA, a sound server might be desired. As explained above, a
sound server provides an intermediate layer between the user application and
ALSA driver modules. It provides higher level functionality such as mixing
sound from multiple applications.
A popular and easy to use sound server is pulseaudio (see section Pulse Audio),
although it does have significant latency. A lower latency sound server is
Jack/Jack2.
For professional or semi-professional sound production, a special low latency
compilation of the linux kernel can be used.
--------------------------
| ALSA configuration files |
--------------------------
Although normally not needed, ALSA is configurable through various files.
ALSA stores its settings in /var/lib/alsa/asound.state, however, this is not
a file to edit directly.
Various ALSA configuration files are located in directory /usr/share/alsa.
ALSA's main configuration file is /usr/share/alsa/alsa.conf, although this file
should not be edited directly either.
The system wide configuration file /etc/asound.conf, and the user configuration
file ~/.asoundrc may be used to configure ALSA.
Note, user configuration settings override system configuration settings.
The configuration file can be used to do such things as
* Change the default output from an analog to digital output device.
* Create custom ALSA devices, such as mixers
* Resampling, and more.
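For instance, a software mixing device can be defined with ALSA's dmix plugin.
A minimal sketch (the device name "softmix", the hardware device "hw:1,0" and
the ipc_key value are illustrative; adjust to your hardware):
pcm.!default {
    type plug
    slave.pcm "softmix"
}
pcm.softmix {
    type dmix
    ipc_key 1024
    slave {
        pcm "hw:1,0"
    }
}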
For more about the asoundrc file read /usr/share/doc/alsa-lib/asoundrc.txt.
It is also highly recommended to read this webpage for a close look at ALSA
and how to configure it.
The directory /etc/alsa contains various other configuration files.
Here are some things I did in configuring my Asus desktop running Archlinux with
only a window manager and without pulseaudio.
* Example 1.
I changed ALSA's default device from card 0 to card 1. To do this I added to
/etc/asound.conf (or .asoundrc) the following lines:
pcm.!default { type hw card 0 }
pcm.default.card 1
The first line causes the previous definition of pcm.default to be overridden.
The second line defines the default sound card to be 1.
* Example 2.
In this example I create a resampling ALSA device and make it the default
device to which to stream audio. To do this I entered the following lines into
my user configuration file .asoundrc
# I'll define a plug device (does auto sampling conversion etc)
pcm.plug0 = {
type= plug
slave= {
pcm "hw:1,0"
},
};
# I'll also define it to be the default device
pcm.!default {
type= plug
slave= {
pcm "hw:1,0"
},
};
# We'll do the same with ctl
ctl.!default {
type= plug
slave= {
pcm "hw:1,0"
},
};
The keyword "plug" refers to an ALSA device that automatically converts
sampling rates and number of channels in accordance with the hardware's
request.
The keyword "slave" refers to the actual hardware sound device that plug will
forward the converted sound.
So what I did here was to define an ALSA device that resamples an audio stream
(plug), have it forward the resampled data to hardware device 1,0 (which
refers to card 1, device 0), and made it the default device (default),
Now, how did I know to forward the resampled audio stream to hardware device
"hw:1,0"? Keep reading and this will become apparent.
Detailed information on ALSA sound configuration can be obtained from various
files in the directory /proc/asound. I illustrate some of the information I
obtained from /proc/asound for my Asus desktop.
* The file /proc/asound/cards contains a descriptive list of all cards
e.g.
0 [HDMI ]: HDA-Intel - HDA Intel HDMI
HDA Intel HDMI at 0xf7e14000 irq 38
1 [PCH ]: HDA-Intel - HDA Intel PCH
HDA Intel PCH at 0xf7e10000 irq 37
For this example two cards are listed.
The first is the digital audio channel feeding the HDMI port.
The second is a standard audio card with a number of analog and digital
input/output channels.
* Each card has an associated directory:
/proc/asound/card0
/proc/asound/card1
* Each card has at least one device defined for it, and an associated
subdirectory named after each device.
For example the devices for card1 are
/proc/asound/card1/pcm0c
/proc/asound/card1/pcm0p
/proc/asound/card1/pcm1p
/proc/asound/card1/pcm2c
The "c" at end of the name stands for "capture" (recording device)
and "p" stands for "playback" (an output device).
"pcm" stands for pulse code modulation which is the sampling technique in use
(see background section of sound and multimedia).
* Each device comes with an info file.
e.g. /proc/asound/card1/pcm0c/info
The content of the file is:
card: 1
device: 0
subdevice: 0
stream: CAPTURE
id: ALC887-VD Analog
name: ALC887-VD Analog
subname: subdevice #0
class: 0
subclass: 0
subdevices_count: 1
subdevices_avail: 1
* For each device at least one subdevice is present with its own directory
and information file.
e.g. /proc/asound/card1/pcm0c/sub0, /proc/asound/card1/pcm0c/sub0/info
* When configuring ALSA it is often necessary to know the hardware device name
of a device from a sound card. Here too /proc/asound is helpful.
For example, I want to know the hardware device name for the first capture
device on card1. I simply refer to the info file for that device.
e.g. /proc/asound/card1/pcm0c/info
The relevant lines from that file (see above) are:
card: 1
device: 0
subdevice: 0
Thus, the hardware device name to use in an ALSA configuration file would be:
"hw:1,0,0" or simply "hw:1,0" since only one subdevice exists.
"hw" stands for hardware.
Pulse Audio
********************************************************************************
* - Pulse Audio -
********************************************************************************
Pulseaudio is a sound server for Linux. Here is a quote from its man page:
"PulseAudio is a networked low-latency sound server for Linux, POSIX and
Windows systems".
Pulseaudio often comes bundled and activated with the GNOME or KDE desktops.
However, that may not be the case for other desktops, check your desktop
documentation. Alternatively, launch your desktop and from a terminal issue
the command
$ ps -eaf | grep pulse
If you get something like this
gdm 1107 1062 0 Aug16 ? 00:00:00 /usr/bin/pulseaudio --daemonize=no
jdoe 1859 1848 0 Aug16 ? 00:02:38 /usr/bin/pulseaudio --daemonize=no
then you can infer that pulseaudio is running by your display login manager
(gdm) and user jdoe. Each additional user that is logged in to the desktop will
have their own instance of pulseaudio running. In other words pulseaudio is run
per user.
Note, if you try to run pulseaudio as root, it warns you against it:
"This program is not intended to be run as root (unless --system is specified)."
My guess is that it poses a security risk and is therefore discouraged.
If you are using a window manager without a desktop environment, you are likely
to need to install and activate pulseaudio manually.
The base package for your distribution is likely to be named "pulseaudio".
Your distribution's repository may have various pulseaudio plugins and
associated utilities. The simplest way to discover those is to use your package
manager's search facility. For example, in Fedora issue the command
$ dnf search pulseaudio
Some of the (many) results I got are:
pulseaudio-equalizer.noarch : A 15 Bands Equalizer for PulseAudio
pulseaudio-module-x11.x86_64 : X11 support for the PulseAudio sound server
pulseaudio-module-jack.x86_64 : JACK support for the PulseAudio sound server
pulseaudio-utils.x86_64 : PulseAudio sound server utilities
Before getting into the nitty gritty, I should mention one very useful tool
for managing pulseaudio: pavucontrol
It's a very good graphical interface to pulseaudio.
----------------
| Manual startup |
----------------
Most users (running a desktop) will never need to launch, configure or even
tweak pulseaudio. But for those who do, read on.
To start it manually:
$ pulseaudio --start
To end it manually:
$ pulseaudio --kill
Some other pulseaudio start commands for specific contexts:
$ start-pulseaudio-x11
$ start-pulseaudio-kde
This is an issue I had at some point:
At present I need to manually change the owner of the /dev/snd device files to
jdoe (substitute your user name) for pulseaudio to operate properly under
jdoe's account (it may be I simply didn't belong to the audio group).
$ sudo chown jdoe /dev/snd/*
Alternatively, change the permissions for other users
$ sudo chmod a+rw /dev/snd/*
Note, the change of permissions will not persist across boots.
To control the pulse audio server use command pactl.
* Example - loading module module-loopback
This module couples microphone to speakers with a given latency.
For minimum latency possible
$ pactl load-module module-loopback
For a one millisecond latency (you will hear a slight echo)
$ pactl load-module module-loopback latency_msec=1
To make the change permanent add the following line to the pulse audio
config file (either /etc/pulse/default.pa or ~/.config/pulse/default.pa):
load-module module-loopback
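Some other common pactl invocations (in recent PulseAudio versions the name
@DEFAULT_SINK@ refers to the default output device):
$ pactl list short sinks
$ pactl set-sink-volume @DEFAULT_SINK@ 50%
$ pactl set-sink-mute @DEFAULT_SINK@ toggle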
To see which sound devices are being used by me (whoami), issue
$ fuser -v /dev/snd/*
-------------
| GNOME issue |
-------------
(The following issue may have been resolved by the time you are reading this).
In GNOME the pulse audio daemon runs individually for each user logged in. The
first user to log in will grab the default device. This means that other users
will not be able to use it (and if that is the only playback/input device,
then they will not be able to play or record sounds).
I found a "fix" for this at this webpage.
The idea is to have a user who is always logged in (or at least the first to
log in).
For the "always logged in user" do as follows:
$ cp /etc/pulse/default.pa ~/.config/pulse/
At the end of the file add the line:
load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1
Restart pulseaudio:
$ pulseaudio -k # -k = --kill
$ pulseaudio --start
Note, under GNOME it should respawn (get restarted automatically).
For all other users create a one line file as such:
$ echo "default-server = 127.0.0.1" > ~/.config/pulse/client.conf
What happens now is that all users' pulseaudio streams get routed to the default
server 127.0.0.1 (local machine), and since the always-logged-in user is the only one
who has that server module loaded, that is where the pulse traffic will be
forwarded.
--------------------------------
| Pulseaudio configuration files |
--------------------------------
If pulseaudio is incorrectly selecting (playback/capture) devices then manual
configuration will be necessary.
/etc/pulse/default.pa is the default configuration file.
If you want whatever configuration changes you make to apply globally edit this
file.
If you want the configuration to apply to a single user create a local version
~/.config/pulse/default.pa
The local configuration always overrides the global configuration.
Other configuration files can be found in the directory /etc/pulse.
-------
| pacmd |
-------
pacmd is a command line utility for reconfiguring pulse audio during runtime.
Being a command line tool it can be invoked from startup scripts such as
.xinitrc, .bash_profile.
If you prefer a GUI application then install pavucontrol.
Run pacmd without arguments to get a shell interface to pulse audio
$ pacmd
Type "help" to get a list of commands available in the shell environment.
Type "exit" to quit the shell environment.
Here are some examples of using the pacmd command.
* To list sinks (playback devices):
$ pacmd list-sinks | egrep -i 'index:|name:'
An asterisk (*) appears next to default sink.
* To list sources (capture devices):
$ pacmd list-sources |egrep -i 'index:|name:'
* To change the default sink or source:
$ pacmd set-default-sink 0
$ pacmd set-default-source 4
In the above two commands the index of the sink or source was used.
However, it's better to use the sink's (or source's) device name since the
device index is established in the order devices are detected, which can vary
from one boot to another. This is especially true for sinks and sources
from pluggable USB devices (e.g. USB sound card or microphone).
The device name for sinks can be obtained as follows
$ pacmd list-sinks
I include a few lines of the output
* index: 1
name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
driver: <module-alsa-card.c>
flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY DYNAMIC_LATENCY
state: SUSPENDED
...
The field next to "name:" is the device name:
"alsa_output.pci-0000_00_1b.0.analog-stereo"
Note, the brackets <> are not part of the name.
Similarly I can list sources and obtain their device names from the output
$ pacmd list-sources
When issuing pacmd commands it is also acceptable to reference sinks and sources with
their hardware address (hardware addresses were discussed in the context of
ALSA. See above). For example to load modules, I reference the sink and source
devices using their hardware address hw:1,0
$ pacmd load-module module-alsa-sink device=hw:1,0
$ pacmd load-module module-alsa-source device=hw:1,0
Note, "hw:1,0" is the hardware identifier for both the playback and capture
devices. They may, however, differ. Check your local ALSA hardware device
mapping.
To load/unload a module
$ pacmd load-module name [arguments]
$ pacmd unload-module index|name
To describe a module
$ pacmd describe-module name
Any of these commands can be made a permanent part of the configuration by
placing them inside the file /etc/pulse/default.pa or ~/.config/pulse/default.pa
-----------------
| Further reading |
-----------------
Read the man page on these commands and topics:
$ man pacmd
$ man pactl
$ man pulse-cli-syntax # Describes commands and syntax
$ man default.pa
Read more about pulseaudio in this Archwiki webpage
Image Manipulation and Conversion Utilities
********************************************************************************
* - Image Manipulation and Conversion Utilities -
********************************************************************************
Linux repositories usually contain many software packages for image manipulation
and conversion. The following is a partial list. For some of the
packages with which I am more familiar, I expound with examples.
Before proceeding with the tools, here is a short (but incomplete) list of
common image formats (bitmap formats):
* jpeg - A very popular format employing the JPEG standard for lossy compression.
A Q factor, ranging from 1 to 100, controls quality.
File extensions are .jpeg, .jpg, .jpe
An associated container format (JFIF) holds images encoded with the jpeg
algorithm.
File extensions for this container format are .jfif, .jfi, .jif
* gif - Lossless compressed format (very common) (See webpage)
* tiff - Originally a format used by scanners; supports LZW compression
File extensions are .tiff or .tif
* bmp - PC bitmap data, Windows 3.x format
* Netpbm formats (Netpbm PGM "rawbits" image data)
- ppm - Netpbm color image format
- pgm - Netpbm grayscale image format
- pbm - Netpbm bi-level (normally Black & White) image format
* pnm - Netpbm superformat. It refers collectively to PBM, PGM, and PPM
The following formats feature vector graphics capabilities:
* ps - Postscript language. Originally developed by Adobe Systems as a printer
language. For more about postscript read this Wikipedia article.
* pdf - Short for "Portable Document Format", this format was designed by Adobe
Systems to present documents in a device independent manner. It was
eventually released as an open standard. Each page of a pdf file is
interpreted independently of other pages in the document. The pdf format is
based on a simplified Postscript language. Although lacking some of its
features, it adds others, such as transparency. For more about pdf read
this Wikipedia article.
-------------
| Ghostscript |
-------------
Ghostscript is a classic utility that converts Adobe postscript or Adobe pdf
documents to any number of printing languages including fax format. It usually
comes bundled as part of a core Linux installation, or at least included in the
distribution's repository.
Example usage:
* To convert file.ps to file.g3 (fax format)
$ gs -dNOPAUSE -sDEVICE=faxg3 -sOutputFile=file.g3 file.ps
* Note, the command can be invoked as
$ ghostscript ...
(/usr/bin/ghostscript is a symbolic link to /usr/bin/gs)
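* Another common use is converting postscript to pdf via the pdfwrite output
device, for example (file names are illustrative)
$ gs -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -sOutputFile=file.pdf file.ps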
Refer to its man page for a more comprehensive description including examples
$ man gs
Ghostscript can be used to filter out images, text, and vector graphics from a
pdf file as follows.
* To filter out images
$ gs -o outfile.pdf -sDEVICE=pdfwrite -dFILTERIMAGE infile.pdf
* To filter out text
$ gs -o outfile.pdf -sDEVICE=pdfwrite -dFILTERTEXT infile.pdf
* To filter out vector graphics
$ gs -o outfile.pdf -sDEVICE=pdfwrite -dFILTERVECTOR infile.pdf
See man pages for more
$ man gs
Also see this web page.
-----------------
| libtiff-tools |
-----------------
libtiff-tools is a suite of tools for processing tiff images.
For a complete list see the libtiff Website.
Examples of what's included and how to use the tools:
* tiffcp - Concatenate individual tiff files into one file.
Usage:
$ tiffcp src1.tif src2.tif dest.tif
* tiff2bw - Convert color tiff image to grayscale (no thresholding done)
$ tiff2bw src.tif dst.tif
Can specify a compression scheme with -c option
* ppm2tiff - Convert PPM, PGM or PBM format images to tiff format
Can specify a compression scheme with -c option, and resolution with -R.
e.g.
$ ppm2tiff -c jpeg -R 200x200 in.ppm out.tiff
* tiff2pdf - Convert tiff file (even a multi-page tiff file) to pdf.
Usage:
$ tiff2pdf -F -pletter -o filename.pdf filename.tiff
* tiffdither - Dither a gray scale image to a bilevel (black and white)
image. Good for faxes.
----------
| Netpbm |
----------
This is a set of command line tools for graphics image manipulation and
conversion. See the Netpbm website for more about it.
Also see the Netpbm user manual.
Also see this Wikipedia article.
Netpbm works with three main formats:
* pbm - monochromatic format (white and black)
* pgm - gray scale format (either 0-255 scale or 0-65535 scale)
* ppm - full color format (256 colors per RGB channel; total 16,777,216 colors)
Collectively these formats are known as pnm, and a file bearing this extension
is simply one of the above files. The two byte magic number at the beginning of
the file identifies which kind of file it is.
A more general and abstract format which can represent arbitrary data is pam.
These formats are normally used as intermediate formats in the manipulation or
conversion process. The netpbm tools are used mainly together with Unix
pipelines to accomplish the desired image conversion or manipulation. That is,
the conversion/manipulation process is broken into a series of steps, whereby
the result of one step is fed into the next step via a pipeline, and so forth.
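For example, a jpeg image can be reduced to half size and saved as a png
entirely with Netpbm tools (a sketch; file names are illustrative):
$ jpegtopnm photo.jpg | pnmscale 0.5 | pnmtopng > photo-small.png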
The Netpbm utilities are meant to be simple, and as such don't have as many
features as Imagemagick or Gimp.
I offer a few examples of how to use this set of tools to accomplish various
tasks.
* Convert from Netpbm color format to PC format.
$ ppmtobmp file.pnm > file.bmp
* Convert a pnm graphics file to postscript at 72 dpi.
Line rendering is used so it's compact but poor quality.
$ pgmtopbm sig.pnm | pbmtolps -dpi 72 > file.ps
Can also use ImageMagick, or Gimp (see respective subsections.)
* Convert a pnm graphics file to an encapsulated postscript file
$ pnmtops -noturn -scale 0.3 filename.pnm > filename.eps
* Convert gif to encapsulated postscript file
$ giftopnm filename.gif | pnmtops -noturn > filename.eps
* Take a grayscale image and produce a black and white image based on some
threshold value
$ pamthreshold -threshold=0.6 src.pnm
In the example the threshold value is 0.6 or 60%.
See man page for other options.
* Take a grayscale image and produce a dithered black and white image that
appears to have varying levels of gray when looked at from a suitable
distance. The source file should be a gray scale image (pgm) (although no
checking for that takes place.)
$ pamditherbw [options] src.pgm
See man page for options and how to achieve desired dithering and effects.
Besides image format conversion tools, various image manipulation tools are
bundled into the Netpbm suite.
* Rotate an image
$ pnmrotate angle src.pnm
* Concatenate images
$ pnmcat src1.pnm src2.pnm ...
* Select a rectangular region from an image
$ pnmcut [-left=] [-right=] [-top=] [-bottom=] [-width=] [-height=] [-pad=] src.pnm
Use the different options to specify the position and size of the rectangular
region. Read man page for more.
* To annotate an image use ppmdraw
$ ppmdraw -script=script
$ ppmdraw -scriptfile=filename
The script contains drawing instructions for lines, shapes and text.
See man page for how to construct a script and sample scripts.
The full list of Netpbm programs are found here.
-----------------------------------------
| Pdftk - manipulation tool for pdf files |
-----------------------------------------
pdftk is a very powerful command line tool for manipulating pdf files.
Some of its uses are assigning a password to a document, concatenating two or
more documents, rotating pages, and splitting or merging pdf files.
(For installing on Archlinux see below.)
Some examples:
* Extract pages 3-4 from srcdoc.pdf and save as outdoc.pdf
$ pdftk srcdoc.pdf cat 3-4 output outdoc.pdf
* Concatenate files file1.pdf and file2.pdf into outdoc.pdf
$ pdftk file1.pdf file2.pdf cat output outdoc.pdf
* Concatenates pages 1-3 of infile1.pdf with pages 2-4 of infile2.pdf
$ pdftk A=infile1.pdf B=infile2.pdf cat A1-3 B2-4 output outfile.pdf
* Apply a password to a pdf file
$ pdftk infile.pdf cat output infileenc.pdf user_pw mypassword
* Apply a user password and owner password to a pdf file
$ pdftk infile.pdf cat output infileenc.pdf user_pw userpass owner_pw ownerpass
* Apply a password to a pdf file with printing allowed
$ pdftk infile.pdf cat output infileenc.pdf user_pw mypassword allow Printing
* Uncompress a pdf file
$ pdftk infile.pdf cat output outfile.pdf uncompress
Normally pdf files use compression to store the text inside them.
* To compress a pdf file
$ pdftk infile.pdf cat output outfile.pdf compress
* Decrypt a pdf file
$ pdftk infile.pdf input_pw ownerpass output outfile.pdf
Note, owner password (ownerpass) is required for decryption.
* Concatenate pages 1-3, while rotating page 3 by 90 degrees (east)
$ pdftk infile.pdf cat 1 2 3east output outfile.pdf
Other rotation values are west (270), south (180), north (0), left (-90),
right (+90), down (+180). The latter three make relative adjustments to
the orientation of the page.
* Rotate entire document by 90 degrees (east)
$ pdftk infile.pdf cat 1-endeast output outfile.pdf
* For usage syntax issue command without arguments
$ pdftk
* For complete help
$ pdftk --help | less
or consult man page.
To install pdftk on Archlinux go to the AUR and download the libgcj17-bin and
pdftk-bin snapshots (which are pre-compiled binary packages from the debian
repository). Untar and install using makepkg -sri.
It is also possible to download the pdftk snapshot (without -bin), but then it
will be necessary to download gcc-gcj from the AUR, which is more difficult and
time consuming to compile.
Pdftk is not available in Fedora's repositories any more, but can be obtained
as a SNAP. Refer to the section on SNAPS.
---------
| Poppler |
---------
poppler is a library and a set of command line tools for manipulation
of pdf files.
The following are some command line utilities included in this library:
* pdfunite - concatenate several pdf documents.
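Usage (file names are illustrative):
$ pdfunite file1.pdf file2.pdf merged.pdf
The last file named is the output.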
* pdfseparate - extract individual pages from a pdf document.
Usage:
$ pdfseparate [-f firstpage] [-l lastpage] file.pdf pattern
Example:
$ pdfseparate -f 1 -l 2 file.pdf pageno-%d.pdf
This invocation creates the files pageno-1.pdf (page 1 of file.pdf) and
pageno-2.pdf (page 2 of file.pdf).
* pdfdetach - extract embedded files (attachments) from pdf file.
See man page for a full description of the command and usage information.
* pdffonts - list fonts and font related info in pdf document.
$ pdffonts file.pdf
* pdfimages - extract and save images from a pdf file.
$ pdfimages file.pdf imageroot
* pdfinfo - print the contents of the "Info" dictionary of the pdf file
and other useful info. See man page for more.
* pdftocairo - convert pdf to bitmap image format or vector graphics
format.
$ pdftocairo src.pdf out.png|out.jpeg|out.tiff|out.pdf|out.ps|out.eps|out.svg
* pdftohtml - convert pdf files into HTML, XML and PNG images.
* pdftoppm - convert pdf files into color portable pixmap format (ppm)
(see subsection on Netpbm tools). Each page is converted into one ppm file.
$ pdftoppm file.pdf page
Will produce the image files page-1.ppm, page-2.ppm, etc.
This command has many options. Refer to man page for more.
* pdftops - convert pdf file to a postscript file.
$ pdftops file.pdf file.ps
$ pdftops file.pdf > file.ps
This command has many options. Refer to man page for more.
* pdftotext - extract text from a pdf file
$ pdftotext src.pdf dst.txt
See man page for options used to limit pages and crop areas from which to
extract text.
* pdfsig - verify digital signature of a pdf document
$ pdfsig file.pdf
---------
| Gphoto2 |
---------
* gphoto2
A command line digital camera interface application.
Some examples to illustrate usage:
* To detect digital camera
$ gphoto2 --auto-detect
* To list folders on camera
$ gphoto2 -l
* To list files
$ gphoto2 -L
* To get files 1 through 20 according to numbers in file-list
$ gphoto2 -p 1-20
* To delete files in folder /store_00010001/DCIM/100K4530
$ gphoto2 -f /store_00010001/DCIM/100K4530 --delete-file 1-20
* To delete all files in folder /store_00010001/DCIM/100K4530
$ gphoto2 -f /store_00010001/DCIM/100K4530 -D
-------------
| ImageMagick |
-------------
ImageMagick is a powerful suite of tools to display and manipulate
images (for more see their website).
There are a number of command line tools available in this suite. The most
commonly used ones are display and convert. They take numerous command line
options. For how to use command line options see this webpage.
(I) display - Utility to display an image
Examples:
* Display a gif image
$ display img.gif
* Display jpg image rotated by 45 degrees
$ display -rotate 45 img.jpg
* Display images bearing "jpg" extension in current directory as a slide show
$ display -delay 3 *.jpg
The "-delay 3" option sets the time between loading images to three seconds
* Display image so that width is scaled up or down to 700. Height will be
scaled so as to preserve aspect ratio.
$ display -sample 700 *.jpg
(II) convert - Convert from one format to another different format with
the option of applying various transformations. Can also apply
transformation while retaining the same format.
Note, in some installations the command name is "magick", rather than
"convert".
Examples:
* Threshold image
$ convert imgsrc.tiff -threshold 50% imgdst.tiff
Degree of threshold is specified in percent (In example 50%)
* Append two images together side by side, and rotate counter-clockwise:
$ convert img1.tif img2.tif -append -rotate -90 imgout.tif
* Changing image size
$ convert -sample 700 imgsrc.jpg imgdst.jpg
Changes size of imgsrc.jpg to 700 pixels wide and saves as imgdst.jpg
You can also specify a reduction by percentage
$ convert -sample 50% imgsrc.jpg imgdst.jpg
Read a little further down on the use of the -thumbnail option, which
incorporates a number of actions, including resampling and filtering to
enhance the appearance of the image, coupled with a reduction in file size.
* cropping
$ convert -crop 500x700+100+200 imgsrc.jpg imgdst.jpg
Crops imgsrc.jpg @ offset (100,200) to size (500,700); saves as imgdst.jpg
* Change one color (e.g. white) to another (e.g. blue)
$ convert -fill blue -opaque white srcimg dstimg
* Make a certain color (e.g. white) transparent
$ convert -transparent white srcimg dstimg
* Convert to pdf:
$ convert img.tiff img.pdf
* Resize an image to 50% and increase its brightness by 25%
$ convert -resize 50% -modulate 125% srcimg dstimg
* Change color depth per pixel:
$ convert -depth 24 srcimg dstimg
* Reduce number of colors in colormap of image (assuming image employs an
indexed color scheme)
$ convert -colors 2 srcimg dstimg
* Draw option allows annotating image with shapes and text.
e.g. -draw 'rectangle 20,20 100,100'; -draw 'text 50,20 A'
can also specify font and fontsize
e.g. -font Times -pointsize 72
* Example of annotating with rotation
$ convert a.tif -rotate 45 -font Times -pointsize 90 -draw 'text 400,300 SAMPLE' -draw 'text 600,500 SAMPLE' -rotate -45 b.tif
Changing the resolution attribute (doesn't change image)
* Change to 300x300 resolution
$ convert -density 300 srcimg dstimg
* Change to 72x90 resolution
$ convert -density 72x90 srcimg dstimg
* Convert from tiff to pdf while applying compression
$ convert a.tif a.jpg; convert a.jpg a.pdf
* Can also rotate or apply other options along the way
$ convert a.tif -rotate 90 a.jpg; convert a.jpg a.pdf
* Changing brightness and/or contrast
$ convert -brightness-contrast 20x30 src.tif dst.tif
The 20x30 operand specifies that brightness should be increased by 20
and contrast by 30. Use negative numbers for a reduction in brightness
and/or contrast.
* Applying a median filter (good for despeckling)
$ convert -median 3 src.ppm dst.ppm
-median option takes geometry operand specifying size of median filter
(e.g. 3, 4x5)
* Create an image gray.png of solid gray
$ convert -size 100x150 canvas:gray gray.png
In this more advanced example, an unfilled red rectangle of width 2 is
drawn on the canvas, followed by an annotation of the text "Gray" colored
blue in the center of the drawing area
$ convert -size 100x150 canvas:gray -stroke red -strokewidth 2 -fill none -draw 'rectangle 5,5 95,95' -stroke none -fill blue -gravity center -draw 'text 0,0 Gray' gray.png
(III) mogrify - Process image as with convert except overwrite original
image.
For example: To invert pixel colors
$ mogrify -negate img.gif
* If you do not wish to overwrite the original image, use the -path option.
For example
$ mogrify -path ~/processedimgs -negate img.gif
* Mogrify can also be used for processing many images at once:
$ mogrify -path ~/processedimgs -negate *.gif
In this case all gif images in the working directory will be processed,
and the outputs placed in the directory ~/processedimgs
(IV) montage - Useful for pasting images side by side or one above the other
(see also -append option used in convert utility above)
* Example:
$ montage a2.tif a1.tif -tile 2x1 -geometry +0+0 out.tif
This command invocation tiles a2.tif and a1.tif horizontally without spacing
in between.
* Note, if using the latter with tiff2pdf, then force 8 samples per pixel
using -depth option, as such:
$ montage a2.tif a1.tif -tile 2x1 -geometry +0+0 -depth 8 out.tif
(V) identify - Provide information on imgfile
Example:
$ identify -verbose imgfile
imgfile can be of any image format recognized by ImageMagick.
For other tools see man page
$ man ImageMagick
For more detailed information, refer to ImageMagick web site.
Also see this webpage.
Troubleshooting:
* If you get an error: "convert: attempt to perform an operation not allowed
by the security policy `PDF' @ error/constitute.c/IsCoderAuthorized/408"
See this webpage.
Edit /etc/ImageMagick-6/policy.xml
And comment out the line:
<policy domain="coder" rights="read | write" pattern="PDF" />
i.e. wrap it in an XML comment: <!-- <policy ... /> -->
-----------------------------
| Thumbnails with ImageMagick |
-----------------------------
In general, images found in websites are compact versions of the original photo
image. It is necessary to use reduced size image files to reduce storage
requirements on the host server, and perhaps more importantly to reduce network
load when accessing the webpage. This not only improves the browsing experience
resulting from a faster download of the webpage, but can also translate into
significant cost savings for frequently accessed webpages containing many
images.
For example, a typical photo taken with a modern phone camera, and compressed
with jpeg, can easily take up 3MB. If a particular webpage contains 10 such
images, then each time someone accesses the webpage more than 30MB of network
traffic is generated. ImageMagick is very useful for creating compact images
for use in websites. The commandline tools in the ImageMagick suite are
particularly amenable to batch processing, thus, making them suitable when there
are many images to reduce.
I'll give a simple example of this using the -thumbnail option.
To reduce myimg.jpeg to a quarter of its linear dimensions (one-sixteenth of
its area) use
$ mogrify -path thumbnails -thumbnail 25% -quality 75 myimg.jpeg
Alternatively, use convert to achieve the same result
$ convert myimg.jpeg -thumbnail 25% -quality 75 thumbnails/myimg.jpeg
One thing to be aware of in using the thumbnail option is that meta data is
stripped off the image (same as using -strip option). Much of the meta data
may not be of concern, but certain settings could be important to preserve. For
example, the date a photo was taken. A particularly important setting that
is relevant to photos taken with a digital camera is the image orientation.
This too will be stripped away, and when viewed, the image may appear with the
incorrect orientation. To prevent this use the -auto-orient option. For
example,
$ convert myimg.jpeg -thumbnail 25% -auto-orient -quality 75 thumbnails/myimg.jpeg
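When there are many photos to reduce, the same command can be driven from a
small shell loop. A sketch (the directory and file names are illustrative):
$ mkdir -p thumbnails
$ for f in *.jpeg; do convert "$f" -thumbnail 25% -auto-orient -quality 75 thumbnails/"$f"; done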
You may manipulate meta data with the exiftool.
In general, the larger the reduction in image size that you specify, and the
lower the quality factor you allow, the greater reduction in file size that
can be anticipated.
Before sending off your reduced size images or incorporating them into your
webpage, it's worthwhile to try different image sizes and quality factors, and
decide on a reasonable compromise between visual appeal and file size.
The use of the -thumbnail option is superior to the use of the -sample option
(discussed in the previous section), as it not only smooths out the reduced
image (thus enhancing its presentability), but also results in smaller files.
A good article on using ImageMagick tools for reducing image file sizes can be
found here. The article provides a more in-depth treatment of this topic,
including a presentation of the theory behind it, and a survey of various
filtering options.
Another article I found useful can be found here.
-----------------------
| Scanned images to pdf |
-----------------------
The following outlines a procedure for thresholding a scanned tiff image or
images and converting to a pdf document.
Assume the scanner software outputs an uncompressed grayscale or color tiff
file fil-src-pg01.tif and embeds resolution information in the file.
For example if using SANE's scanimage utility the document can be scanned
as such:
$ scanimage -x 210 -y 297 --resolution 200 --mode Gray --format=tiff > fil-src-pg01.tif
* Step 1: Threshold each file
$ convert fil-src-pg01.tif -threshold 50% fil-th-pg01.tif
* Step 2: Use indexed color map for tif files
$ convert -colors 2 fil-th-pg01.tif fil-mono-pg01.tif
Note, this is not an essential step.
* Step 3: Compress with Gimp or convert (Imagemagick).
In Gimp export to fil-comp-pg01.tif.
When prompted for compression method select LZW or JPEG.
Save as fil-comp-pg01.tif
Alternatively, use convert utility (see subsection on Imagemagick)
$ convert fil-mono-pg01.tif -compress jpeg fil-comp-pg01.tif
* Step 4: If doing this for more than one image then concatenate all
compressed and thresholded tiff files together:
$ tiffcp fil-comp-pg01.tif outfil.tif
* Step 5: Convert to pdf
$ tiff2pdf outfil.tif outfil.pdf
Check size to make sure file is not large, otherwise check to see if
compression is being utilized.
If using convert utility the entire procedure can be accomplished in one step:
$ convert fil-src.tif -threshold 50% -colors 2 -compress jpeg -quality 90 output.pdf
---------------
| Image viewers |
---------------
I found a list of image viewers in a Fedora Magazine article.
* eog
* eom
* eyesite
* feh
* geeqie
* gliv
* gpicview
* gThumb
* gwenview
* nomacs
* qiv
* ristretto
* shotwell
* viewnior
* xfi
* xzgv
* sushi (press space bar in Gnome File Manager to show image using sushi)
See this webpage for more.
----------
| Exiftool |
----------
This is a very powerful Perl based command line tool used to read/write/manipulate
metadata in all sorts of files. This is particularly useful for multimedia
files, but not limited to.
To display the metadata of a jpeg file
$ exiftool img.jpeg
A table of tags with their corresponding values will be displayed.
The tag descriptions may differ from the actual Tag ID, which is the computer
readable equivalent of a tag name.
To display using Tag IDs rather than descriptions
$ exiftool -s img.jpeg
The Tag ID may be less descriptive, but you will see the actual tag name used
in the file. That is also the name you need to provide exiftool when
specifying tag names.
A few examples of tag descriptions and tag IDs:
-----------------------------------------------
Description Tag ID
-----------------------------------------------
Modify Date ModifyDate
Resolution Unit ResolutionUnit
File Modification Date/Time FileModifyDate
To display in csv format
$ exiftool -csv img.jpeg
Exiftool is also capable of fully or selectively duplicating metadata from one
file to another. For example to fully apply the metadata from srcfile to
dstfile, type
$ exiftool -tagsFromFile srcfile dstfile
Exiftool is particularly useful within scripts to automate tag manipulation
for a collection of files.
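For instance, a common batch operation (a sketch; DateTimeOriginal and
FileModifyDate are standard tag names) is to set each file's modification time
from the date the photo was taken:
$ exiftool '-FileModifyDate<DateTimeOriginal' *.jpeg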
The Image::ExifTool library provides a set of Perl modules to read/write meta
data in files in many formats. See here for more.
The tool comes with a very extensive man page, replete with examples.
$ man exiftool
or simply
$ exiftool
Further reading:
* Official Website
* man pages or here
* Image::ExifTool Perl library
* Tags names
Video
********************************************************************************
* - Video -
********************************************************************************
My experience with video is very limited, so this section is lacking.
---------------
| Video Formats |
---------------
There are many video formats out there. See this Wikipedia article for a list
of them. Some of the notable ones are
* MPEG
MPEG (Moving Picture Experts Group) is an audio and video standard for
compression and streaming of video and audio. The MPEG standard has evolved
over the years. The most notable are MPEG-1 (the first video/audio
standard, includes the still popular mp3 format), MPEG-2, MPEG-3 (merged with
MPEG-2), MPEG-4. File extensions such as .mp2, .mpeg, .mpg, .mp3, .mp4,
.m4p, .m4v (Apple) are all associated with various MPEG audio, video and
multimedia formats. H.264 or MPEG-4 Part 10 is a video compression standard
used in many multimedia formats.
* ogg Video
Ogg is an open container format for audio and video storage and streaming.
Ogg Video is the associated video format. The audio that comes with Ogg Video
can be Vorbis or FLAC format.
File extensions may be .ogv or .ogg
This format is free and unencumbered by software patents.
* WebM
Created for HTML5 video. File extension is .webm
* Flash Video (FLV)
A video format used by Adobe's Flash Player using H.264 (video compression)
and AAC (audio compression). File extension is .flv
* F4V
The successor to FLV. File extension is .f4v
* QuickTime File Format
Apple's quicktime multimedia container format.
It allows many codecs, but predominantly Sorenson Video 2 and 3.
File extensions are .mov and .qt
* AVI (Audio Video Interleave)
A multimedia container format developed by Microsoft. It is not associated
with any particular codec. File extension is .avi
* Windows Media Video (WMV)
A series of video codecs and coding formats developed by Microsoft.
File extension is .wmv
---------
| Mplayer |
---------
mplayer is a command line tool for playing videos on Linux and many other
platforms.
mencoder is an accompanying command line tool for encoding from one mplayer
playable format to another.
gmplayer is a GUI frontend to mplayer.
The three share the same man page (which is nearly 10K lines long).
Note, when installing mplayer from a repository, gmplayer may not necessarily
be bundled with the package.
Mplayer accepts control commands from the keyboard, mouse and joystick.
For example
LEFT arrow ......... Seek backward 10 seconds
RIGHT arrow ........ Seek forward 10 seconds
UP Key ............. Seek forward 1 minute
DOWN Key ........... Seek backward 1 minute
PgUp ............... Seek forward 10 minutes
PgDn ............... Seek backward 10 minutes
[ .................. Decrease current playback speed by 10%
] .................. Increase current playback speed by 10%
{ .................. Halve current playback speed
} .................. Double current playback speed
BACKSPACE .......... Reset playback speed to normal
SPACE .............. Pause/Resume
q or ESC ........... Quit
These and others are all configurable via the input.conf file (see below).
Refer to the man page for a complete list of controls.
$ man mplayer
Mplayer has numerous options. Some examples
-gui ............... Enable GUI interface
-nogui ............. Disable GUI interface
-flip .............. Flip video upside-down
-rootwin ........... Show video in root window (note, desktop background in
some desktop environments may obscure the video)
-h ................. Help
To apply a video filter
$ mplayer -vf filter_type file
For example, to rotate by 90 degrees clockwise and flip
$ mplayer -vf rotate video_file
or equivalently
$ mplayer -vf rotate=0 video_file
Other rotations:
rotate=1 ............ Rotate by 90 degrees clockwise
rotate=2 ............ Rotate by 90 degrees counterclockwise
rotate=3 ............ Rotate by 90 degrees counterclockwise and flip
Other filters include "flip", "scale", "size" and many more.
For more about video filters search man page for "VIDEO FILTERS".
Can also apply audio filters. Search man page for "AUDIO FILTERS" for more
about that.
Mplayer and mencoder can be configured via configuration files:
* System wide configuration is /etc/mplayer/mplayer.conf
* Keyboard/mouse/joystick bindings are configurable via /etc/mplayer/input.conf
* Menu configuration file is /etc/mplayer/menu.conf
* User specific configuration files ~/.mplayer/config, ~/.mplayer/mencoder.conf
Mplayer also supports profiles (search "PROFILES" in man page).
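For illustration, a user configuration file ~/.mplayer/config might look like
the following sketch (the option values are merely examples):
# always start in fullscreen with software volume control
fs=yes
softvol=yes
# a profile that can be selected with: mplayer -profile big file
[big]
vf=scale=1280:720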
--------
| FFmpeg |
--------
FFmpeg is a suite of command line tools for manipulating and displaying MPEG
format videos and streams.
The following commands comprise this suite:
ffmpeg, ffmpeg-all, ffplay, ffprobe, ffmpeg-utils, ffmpeg-scaler,
ffmpeg-resampler, ffmpeg-codecs, ffmpeg-bitstream-filters, ffmpeg-formats,
ffmpeg-devices, ffmpeg-protocols, ffmpeg-filters.
The tool ffmpeg is a very fast video and audio converter. It can read from a
file or grab from a live audio/video source. It converts sampling rates and
can resize video on the fly. To learn more about it refer to the man page
$ man ffmpeg
To convert from Quicktime format (.mov) to .mp4 format
$ ffmpeg -i mymov.mov -vcodec h264 -acodec mp2 mymov.mp4
The following demonstrates how to extract a portion of a video.
$ ffmpeg -ss "00:00:27" -i invideo.mp4 -codec copy -t "00:00:20" outvideo.mp4
The source video is invideo
-ss specifies the start time of the clip you wish to extract, which in the
example is 27 seconds into the video.
-t specifies the duration (i.e. 20 seconds)
The output video is outvideo
You can crop your video as follows:
$ ffmpeg -i invideo -filter:v "crop=out_w:out_h:x:y" -c:a copy outvideo
For example
$ ffmpeg -i invid.mp4 -filter:v "crop=20:40:100:200" -c:a copy outvid.mp4
You can also use built-in relative parameters.
For example in_w and in_h specify the width and height of the input video,
respectively.
In the following example the video is cropped to retain only the top half
$ ffmpeg -i invid.mp4 -filter:v "crop=in_w:in_h/2:0:0" -c:a copy outvid.mp4
The 0:0 pair specifies the top-left coordinate of the cropping rectangle, and
the in_w:in_h/2 pair specifies its width and height, respectively.
For more about cropping and other available filters see the man page
$ man ffmpeg-filters
For some examples of cropping refer to this StackExchange link.
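Closely related to cropping is scaling. As a sketch (the dimensions are
illustrative), the scale filter resizes a video:
$ ffmpeg -i invid.mp4 -filter:v "scale=640:-2" -c:a copy outvid.mp4
Here 640 is the desired width, and -2 tells ffmpeg to choose a height that
preserves the aspect ratio while remaining divisible by 2.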
The tool ffplay is a simple video/audio player (with many options).
The following are some useful key bindings when playing a video
q, ESC ......... Quit
f .............. Toggle full screen
p, SPC ......... Pause
m .............. Toggle mute
9, 0 ........... Decrease and increase volume respectively
s .............. Step to the next frame
LEFT/RIGHT ..... Seek backward/forward 10 seconds
DOWN/UP ........ Seek backward/forward 1 minute
PgDown/PgUp .... Seek to the previous/next chapter or 10 minutes
For a full list of controls search "While playing" in man page.
-----------------------------------
| Other video playback applications |
-----------------------------------
* hxplay
* xanim
An X Windows utility to view videos.
* totem
A GTK+ video player.
This is usually the default video player in GNOME.
Make sure to add packages: gstreamer1-libav and gstreamer1-plugin-openh264
in order to play mp4 videos.
* VLC Player
A popular cross-platform video player (Qt based interface, not GTK+). Lots of
menu options. VLC ships with its own codec libraries, so mp4 playback normally
works out of the box.
* xine
An X Windows utility to view videos.
* gxine
GNOME interface to xine libraries.
* lqtplay
A command line utility for viewing QuickTime (.mov) movies.
Refer to its man page for more.
--------
| Webcam |
--------
guvcview is a useful GTK+ program to operate a webcam.
-----------------
| Webcam loopback |
-----------------
I experimented once with webcam loopback. After searching the internet I came
across this YouTube video which was very helpful in setting up what I needed.
I offer here a summary of the steps to get this to work.
Webcam loopback in this context means taking a still image and causing it to
appear to the system as though a webcam is producing this image. I did this
mainly for use with an Android simulator. I wanted a certain App to receive
a video stream as though I was pointing the simulated phone's webcam at a
certain image.
These are the steps I followed:
(1) Installing v4l2loopback
$ wget https://github.com/umlaeute/v4l2loopback/archive/master.zip
$ unzip master.zip
$ cd v4l2loopback-master
$ make
$ sudo make install
$ sudo modprobe v4l2loopback
$ lsmod | grep v4l2   # check that the module was loaded by modprobe
# Using ffmpeg to play in loopback mode (onto device /dev/video0)
$ sudo ffmpeg -re -i mustsee.wmv -map 0:v -f v4l2 /dev/video0
(2) Creating a still image video stream with ffmpeg
Here are some example invocations of ffmpeg which I experimented with to
achieve this:
$ ffmpeg -loop 1 -i img.png -c:v libx264 -t 15 -pix_fmt yuv420p -vf scale=320:240 -f v4l2 /dev/video0
$ ffmpeg -loop 1 -i img.png -r 0.2 -t 1500 -vf scale=320:240 -f v4l2 /dev/video0
$ ffmpeg -loop 1 -i img.png -map 0:v -tune stillimage -r 0.2 -t 1500 -video_size 1280x720 -f v4l2 /dev/video0
(3) Load the v4l2loopback module (if not already installed, see the
installation instructions above)
$ sudo modprobe v4l2loopback
# Stream should be fed into /dev/video0
# Create stream for image img.png
$ sudo ffmpeg -loop 1 -i img.png -r 0.2 -t 1500 -vf scale=320:240 -f v4l2 /dev/video0
or
$ sudo ffmpeg -loop 1 -i img.png -tune stillimage -r 0.2 -t 4500 -vf scale=1280:960 -f v4l2 /dev/video0
(Note, in scale=xdim:ydim the dimensions have to be valid dimensions,
i.e. multiples of 320:240)
Note, img.png must be a color image. Will not work otherwise.
# change permissions of /dev/video0 to 666
$ chmod 666 /dev/video0
# Check that stream is working properly
$ ffplay /dev/video0
(4) Launch Android emulator and its camera should be seeing the video stream.
(See Section Android SDK for more about Android device simulators).
$ /home/jdoe/Android/Sdk/emulator/emulator -avd Nexus5v7 -camera-back webcam0 -no-snapshot-load -no-snapshot-save
Note, the option "-no-snapshot-load" causes the emulator to boot afresh.
For some reason this is necessary for the camera to be identified correctly.
I.e. camera is apparently not plug and play.
The option "-no-snapshot-save" tells the emulator not to save the state,
since anyway we will do a fresh boot.
Use the camera App to view the video stream.
Can also feed a moving video stream to the emulated camera.
Here is an example of creating a stream for a motion video titled
"myvideo.wmv"
$ sudo ffmpeg -re -i myvideo.wmv -map 0:v -f v4l2 /dev/video0
Octave
********************************************************************************
* - Octave -
********************************************************************************
Octave is a largely Matlab compatible numeric and scientific scripting and
simulation tool. See the Octave homepage for more about it.
Octave can be invoked with a graphical interface
$ octave
or without
$ octave --no-gui
For a brief description of Octave and a list of options see its man page
$ man octave
---------------
| Help features |
---------------
Octave has extensive built-in help facilities.
For help with general commands type (at the Octave shell prompt)
$ help command_name
e.g.
$ help plot
Another help facility is the doc command.
$ doc
This invocation will display a table of contents, from which you may pick a
topic of interest (e.g. Data Types, Functions and Scripts, Linear Algebra, etc.)
The doc command uses Info Reader to present the help documentation.
Type "H" to get a list of key bindings to help navigate the documentation.
For help with a particular command
$ doc command_name
e.g.
$ doc plot
Note, the help and doc commands are different help features.
Invoking help brings up a description as well as usage syntax, and possibly,
examples for the given function.
Invoking doc for a particular function will bring up the same in the context
of the topic under which the function is categorized. As such, it is possible
to scroll up and down to the description of other functions that are in the
same category.
----------
| Packages |
----------
Octave is modular. That is, aside from its built-in functions, it offers
in-house and third party packages. Some flavors of Linux may split the Octave
bundle into parts, whereby you can initially install from the repository the
core Octave package, and additional packages as needed.
Some third party packages may not be available from the repository, in which
case they can be downloaded from Octave Forge (hosted on SourceForge).
To install a package directly from Octave Forge, invoke Octave as root:
$ sudo octave
At the Octave prompt, type:
$ pkg install -forge package_name
e.g.
$ pkg install -forge odepkg
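Note, after a package is installed it must still be loaded in each Octave
session before its functions can be used:
$ pkg load odepkg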
-------
| Sound |
-------
Octave can play and manipulate sound data.
See sound utilities in documentation.
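For a quick feel, the following sketch (frequency, duration and file name are
arbitrary) generates a 440 Hz tone, plays it, and writes it to a wav file. It
can be typed at the Octave prompt or saved as a script:
fs = 8000;                      % sampling rate in Hz
t  = 0:1/fs:2;                  % two seconds worth of samples
y  = 0.5*sin(2*pi*440*t);       % a 440 Hz tone at half amplitude
sound(y, fs);                   % play through the audio device
audiowrite("tone.wav", y, fs);  % save to a wav file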
Mathematical and Graphing Utilities
********************************************************************************
* - Mathematical and Graphing Utilities -
********************************************************************************
GNU's plotutils replaces UNIX's graph, spline and plot utilities.
* GNU graph plots 2-D datasets or data streams in real time.
It's designed for command-line use and can, thus, be used in shell scripts
(see the example at the end of this list).
It produces output on an X Window System display as well as in the following
formats: SVG, PNG, PNM, pseudo-GIF, WebCGM, Illustrator, Postscript, PCL 5,
HP-GL/2, Fig (editable with the xfig drawing editor), ReGIS, Tektronix,
GNU Metafile format.
Output in Postscript format may be edited with the idraw drawing editor.
Idraw is available in the ivtools package from Vectaport, Inc.
Both xfig and idraw are free software.
* GNU plot translates GNU Metafile format to any of the other formats.
(It is not to be confused with gnuplot, which is a separate plotting program.)
To launch
$ plot
* GNU tek2plot translates legacy Tektronix data to any of the above
formats.
* GNU pic2plot Translates the pic language (a scripting language for
designing box-and-arrow diagrams) to any of the above formats. The pic
language was designed at Bell Laboratories as an add-on to the troff text
formatter.
* GNU plotfont displays character maps of the fonts that are available
in the above formats.
* GNU spline does spline interpolation of data. It normally uses
either cubic spline interpolation or exponential splines in tension, but
it can function as a real-time filter under some circumstances.
* GNU ode numerically integrates a system consisting of one or more
ordinary differential equations. ODE stands for "ordinary differential
equation".
See also section on Octave.
SPICE
********************************************************************************
* - SPICE -
********************************************************************************
SPICE is an open source circuit simulator developed at the University of
California Berkeley and has been around since the 1970's. It is the basis for
many of today's modern open circuit simulators, such as ngspice, as well as
commercial circuit simulators.
For more, see Wikipedia article.
An open source successor to SPICE is ngspice. It should be available in
your distro's repository. To install in Fedora:
$ dnf install ngspice
SPICE can be run in interactive mode inside a terminal, in which case you
specify circuit elements and their layout via commands, just as one would enter
commands in a shell. Simulation commands are entered in a similar fashion.
In the original SPICE, analysis plots were made up of ASCII characters and
didn't have the fine appearance of the output of modern graphing utilities.
SPICE3 added X Windows plotting capability.
SPICE can also be run in batch mode, whereby SPICE is run via the command line
with a circuit file name as its argument.
Circuit files have extension ".cir".
A short overview of the syntax of this file follows:
* The first line of a .cir file is the name of the circuit (e.g. mycir).
* The last line of a .cir file must be
.END
* Comments start with "*"
* Scale factors:
T = 1E12; G = 1E9; MEG = 1E6; K = 1E3;
MIL = 25.4E-6
M = 1E-3; U = 1E-6; N = 1E-9; P = 1E-12; F = 1E-15
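Here is a sketch of a minimal circuit file (an RC low-pass filter; the element
values and the analysis commands are merely illustrative):
RC lowpass example
* a 1 V AC source driving an RC low-pass filter
V1 in 0 AC 1
R1 in out 1k
C1 out 0 1u
* sweep from 1 Hz to 100 kHz, 10 points per decade
.AC DEC 10 1 100k
.PRINT AC V(out)
.END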
To process a file (in batch mode):
$ ngspice -b file.cir > a.out
Note, the output is redirected to a.out.
-------------------------
| Interactive Interpreter |
-------------------------
The interactive interpreter is like a programmable shell - it accepts
commands, can define variables, source scripts, and may be instructed
to perform circuit simulations. See help on the "interactive interpreter"
in the program itself for complete details.
Some illustratives examples to help you get a feel for the interactive
interpreter (note, the "$" is used here to indicate ngspice's prompt symbol):
* To load a spice circuit
$ source circuittorun.cir
* To list it
$ listing
* To run it
$ run
* To display a summary of currently defined variables
$ display
$ display varname1 varname2 ...
* To print a voltage or current (use display to see which voltage and current
vectors are presently defined)
$ print V(1) v1#branch
* Printing all variables
$ print all
* To define a new vector or assign a value to it
$ let a
$ let a=5
To undefine a vector
$ let a = []
* Setting a variable to control spice or nutmeg's behavior
$ set varname ...
TeX
********************************************************************************
* - TeX -
********************************************************************************
TeX is an open source digital typesetting system created by Donald Knuth in
1978. It is best known for its ability to produce high quality mathematical
typeset documents, although it has been used to produce non-mathematical texts
as well. It is widely used in academia, scientific journals, and by publishing
houses.
TeX has undergone several revisions. The author encourages developers to use
the TeX source code in developing variants of TeX and extensions. Many in-house
variants of TeX have been developed by publishing companies to typeset some of
their books.
Unlike most typesetting systems, TeX does not offer a graphical interface.
The TeX engine processes text files containing both the text to be typeset
together with embedded formatting commands.
A simple example is:
This is my first TeX document!
\bye
The command "\bye" is there to tell the TeX engine to stop processing the
document.
To compile this document, save as sample.tex, and compile with the tex command
$ tex sample
To display the output, use the xdvi command:
$ xdvi sample
To produce a typeset PDF document use pdftex
$ pdftex sample.tex
The output file will be named "sample.pdf".
A document containing mathematical formulas might look something like this
The Taylor expansion of the sine function is:
$\sin(x) = x - {x^3 \over 3!} + {x^5 \over 5!} - \cdots
= \displaystyle \sum_{i=0}^{\infty} { (-1)^i \over (2i+1)! } x^{2i+1}$
\bye
Formatting directives and TeX commands always begin with a backslash character
("\"). For example the directive "\displaystyle" instructs TeX to display the
formula text that follows as it would appear on a line of its own (e.g. place
the summation limits underneath and over the summation symbol). Otherwise the
formula would appear as an inline formula would (i.e. styled to be more
vertically compact).
The most powerful feature of TeX is the ability to define macros. Simple or
complex typesetting sequences can be grouped into a macro, and repeatedly
applied in the document, thus, saving on typing. Macros also accept arguments
making them even more powerful. In fact, the popular LaTeX typesetting system
(see next section) is simply a TeX macro layer.
An example of defining and using a macro in a tex file:
\def\myname#1#2{My name is #1. My family name is #2.}
\myname{Joe}{Smith}
\myname{Jill}{Becker}
\bye
Will typeset a document whose formatted content looks something like this:
My name is Joe. My family name is Smith.
My name is Jill. My family name is Becker.
In the macro definition #1 denotes the first argument and #2 the second
argument.
TeX also supports flow control and a looping construct.
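For example, a simple counting loop in plain TeX might look like this sketch
(it uses a count register to typeset five numbered lines):
\newcount\n
\n=1
\loop
  Item number \number\n.\par
  \advance\n by 1
\ifnum\n<6
\repeat
\bye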
------------------------
| Miscellaneous commands |
------------------------
\hbox{} - place contents of {} in a horizontal box
\vbox{} - place contents of {} in a vertical box
\kern5pt - adds 5 points of empty space
\rlap{something} - types something without advancing typesetting point
------------
| Tex output |
------------
By default, TeX, in compiling a document, creates a DVI (DeVice-Independent)
file, whose file extension is ".dvi". The DVI file consists of instructions
and information on how to display or preview a document generated by TeX or
LaTeX. The DVI file is displayed by a DVI viewer such as xdvi.
The DVI file, however, does not contain non-TeX graphics images (e.g.
postscript, PDF, jpeg, gif), or, say, graphics generated by the \special command
(see this link for more about it).
As such, a DVI viewer by itself, will not display such graphics. However, DVI
viewers know how to call ghostscript to render postscript graphics (see here
for more about postscript).
For other types of graphics a specific device driver is necessary to process
them.
For more about DVI click here.
With a DVI file at hand, it is possible to generate a postscript document using
the command:
$ dvips [options] filename
Some of the options are:
-x magnification
-t landscape
-p starting page
-l ending page
-n number of pages to print
-pp comma separated page list
-o name of output file
For example
$ dvips -o mydoc.ps mydoc
will produce (barring errors) the postscript file mydoc.ps.
Note, the .dvi file extension can be omitted in specifying the DVI file.
All images included in the document should be of type encapsulated postscript (eps).
See this section for various tools that can be used to convert from one image
type to another.
With a postscript file in hand, generating a PDF document is simple:
$ ps2pdf mydoc.ps
To have TeX generate a PDF document directly, use the pdftex command (for
LaTeX, use pdflatex)
$ pdftex mydoc
This command will output mydoc.pdf (rather than mydoc.dvi).
Yet another way to generate PDF documents is with the commands dvipdf,
dvipdfm or xdvipdfmx. Use these if, for whatever reason, you don't have PDF
versions of your images available, or you are using epic specials. Of the
three, the first one has worked best for me (at this juncture).
------
| Xdvi |
------
The default previewer for TeX/LaTeX documents is xdvi. The basic invocation
is:
$ xdvi mydoc
xdvi accepts command line arguments. A few of them are:
-expert (x) expert mode
-keep should keep relative position in page when paging through document
-margins dimen (M) specifies "home position"
-paper papertype (e.g. us, legal, a1-a7, etc...)
-copy use this option if xdvi is rendering text very slowly (see man page)
(See man page for more).
To navigate a document displayed by xdvi it's useful to know some key bindings.
This is a partial list (see man page for more):
n next page
p previous page
g move to page of given number, if no number is given it goes to last page
(actual page numbers)
P move to page of given number, if no number is given it goes to last page
(absolute page numbers)
^ moves to home position of page
k moves 2/3 up
j moves 2/3 down
h moves 2/3 left
l moves 2/3 right
c centers window around cursor
K toggle keep option
x toggle expert option
G toggle grayscale anti-aliasing option
D toggle grid
v toggle postscript rendering
V toggle postscript grayscale
^F reads new dvi file
m, U, a, A, <, >, ,, " page marking features
t setpaper menu
---------------------------
| How TeX treats characters |
---------------------------
It is important to realize that TeX knows nothing about bitmap images, and that
includes the bitmap images of the characters that make up the document. If so,
how does TeX lay out and format the characters in your document? The answer in
short is it has access to font metric files which contain all the information
needed to let TeX know how much space a given character (in the given font)
occupies, and how it should align with respect to other characters on the same
line. The basic parameters are:
* reference point - a point on the character baseline marking the character's
left boundary.
* width - total width of imaginary box enclosing character
* height - height of character from baseline to top boundary of enclosing box
* depth - height of character from baseline to bottom boundary of enclosing box
For example the characters "B" and "y" are shown below illustrating how these
parameters would be defined for each, and how they would align with each other
on a line.
width
---- ---
|BBB | | ------ ---
|B B| | |y y| |
|BBB | height | y y| height
|B B| | | y y | |
ref .BBB | | ref . y | --- ------------ baseline
---- --- | y | depth
| y | |
------ ---
TeX treats each character as an imaginary box to be arranged on the
page. Thus, when compiling a document, TeX first consults the metric file of
the given font to obtain the above noted positional/dimensional parameters for
the given character, and then proceeds to arrange the boxes (that will
ultimately contain the bitmap images of the respective characters) in accordance
with the formatting instructions provided by TeX and the author's input file.
The TeX compiler outputs a dvi file which contains instructions on how to
position the boxes, the character contained in each box, and a reference to the
font(s) in use. It should be noted that the dvi file does not embed the font
itself in the dvi file, and, therefore the font files must already be installed
on the system one is using if one is to render and display the dvi file or
convert it to postscript.
The dvi language also makes allowance for embedding third party graphics formats
using extensions called specials, most notably those designed to incorporate
postscript into the document. In TeX, the command \special is used to access
these extensions.
The dvi file is intended to be read by rendering software (e.g. xdvi, dvips).
The rendering software uses the instructions in the dvi file and the font metric
files to position characters (and graphics), and the font files to render the
characters themselves.
-----------------
| Further reading |
-----------------
* Wikipedia Article
* TeX command reference
* TeX showcase
* The TeXbook, by Donald Knuth.
The TeX User Group (TUG) is a rich source of information on TeX (and LaTeX).
LaTeX
********************************************************************************
* - LaTeX -
********************************************************************************
LaTeX is an open source document typesetting program written using TeX macros.
The original author, Leslie Lamport, released version 2.09 before handing over
the project to Frank Mittelbach, who together with others formed the LaTeX3
team. The successor version, LaTeX2e, was released in 1994 by the LaTeX3 team.
LaTeX3 is currently in the development stage.
LaTeX is based on TeX, and uses TeX's powerful macro feature to simplify
typesetting syntax. A few of its many features are:
* Packaging system.
Latex comes bundled with numerous packages. You can find packages for almost
anything. There are packages for doing something as simple as suppressing
page numbers (nopageno) or producing a well formatted € symbol (eurosym),
or more complex packages for graphics rendering (epic, eepic and pstricks),
electronic circuit sketching (e.g. circ, circuitikz), creating music sheets
and compositions, creating state machine and data flow diagrams, and much
more. See the CTAN website for more.
The location of LaTeX packages on your installation varies depending on the
distribution. For me it's /usr/share/texlive/texmf-dist/tex/latex.
* Automatic sectioning.
How one sections a document greatly impacts its readability. Journal articles
are usually divided into sections and subsections. Books are divided into
chapters, sections, subsections and sometimes sub-subsections. Sometimes a
book may be divided into two or more parts.
Using special commands LaTeX automatically handles the formatting of the
various levels of sectioning.
* Automated table and figure numbering and indexing.
LaTeX automatically numbers figures and tables in the document using internal
counters. Therefore, adding a figure or table doesn't require manual
renumbering by the author. Since LaTeX keeps track of all figures and tables,
generating a page listing all figures or tables in the document is a matter
of issuing a single command.
* Automatic generation of a table of contents section.
With a single command a full table of contents is generated.
* Sophisticated bibliographic system using BibTeX.
BibTeX includes bibliographic templates for different types of citations
(e.g. book, article, internet source, custom, etc.). Compiling the document
with BibTeX generates a full bibliographic listing of cited references. In
the LaTeX document, citations are numbered automatically by the LaTeX engine.
Some professional societies, such as IEEE, provide prepared BibTeX
bibliographic entries which you can cut and paste into your BibTeX file.
* Headers, footers and margin notes.
LaTeX provides basic support for these features. Packages such as fancyhdr
expand on these basic capabilities.
LaTeX produces high quality mathematical typeset documents. As such it is very
popular in the scientific academic community.
LaTeX was designed to allow the writer to focus on ideas and writing rather
than the particulars of formatting. Indeed, articles and books produced with
LaTeX come out looking very professional with minimal effort on the part of the
author.
In the following subsections I describe in brief the LaTeX system and what it
takes to get started writing a document. If you intend to use LaTeX regularly
I highly recommend Leslie Lamport's book "LaTeX: A Document Preparation System".
The "LaTeX Companion" (1993 edition) provides a more in depth look at LaTeX and
describes many useful packages. There are also many useful internet resources
on LaTeX.
--------------------
| The LaTeX preamble |
--------------------
Before commencing, I'll note that in LaTeX comments are preceded by the "%"
character. Anything after the percent symbol till the end of the line is
ignored. Spacing in the beginning of the line is also ignored.
Before writing anything in LaTeX it is necessary to tell LaTeX a few things
about the document, and the packages you wish to use. All of these are
included in the preamble (header) section of the LaTeX document.
* The first thing to do is specify the type of document. For example, a book,
an article, etc. In LaTeX, the document type is referred to as "class".
Why does LaTeX need to know this? Because LaTeX will format the document
in accordance with the kind of document it is. This will affect which
sectioning commands and features will be made available (i.e. the style used
in section headings, the structure and style of the title, and so forth).
For example, the book class allows the document to be sectioned into parts,
as well as include various front matter material (e.g. dedication, list of
figures) and back matter material (e.g. index).
The LaTeX command for specifying the class is:
\documentclass[options]{class}
The standard classes that come with LaTeX are:
* article
* report
* book
* letter
* slides
Custom classes are available as well, or you can write your own.
See this webpage for more.
For example declaring the document to be an article in twelve point type:
\documentclass[12pt]{article}
Here is a list of commonly used options:
* Font size: 10pt, 11pt, 12pt
* Paper size and format: a4paper, letterpaper
* Draft mode: draft
* Multiple columns: onecolumn, twocolumn
Note, the multicol package is available for mixing single and multicolumn
formatting. See below on loading packages.
* Formula-specific options:
* Left-alignment of formulas: fleqn
* Labels formulas on the left-hand side instead of right: leqno
* Landscape print mode: landscape
* Single-sided and double-sided documents:
* The left and right margins are symmetric and headers are exactly the same
on every page: oneside
* The above properties are treated differently: twoside
* Titlepage behavior: notitlepage, titlepage
* Chapter opening page: openright, openany
* The second thing to include in the preamble are a list of packages to load.
Simple documents may not need any, though more complex documents are likely
to need them, or may at least benefit from such packages.
The command to load a package is
\usepackage[options]{packagename}
For example, this part of the preamble may look something like:
% math stuff
\usepackage{amsmath} % AMS math symbols
\usepackage{amssymb} % Additional AMS math symbols and fonts
% graphics
\usepackage{graphicx} % Enhanced support for graphics
\usepackage{epic} % Extends LaTeX picture mode
\usepackage{color} % Provides color control for text and graphics
% miscellaneous
\usepackage{array} % Extends array and tabular environments
\usepackage{hhline} % Extensions for borders in tabular environment
Visit the CTAN website where thousands of packages are available for download.
Note, different repositories bundle LaTeX components differently. This is
particularly true of LaTeX packages. Some repositories allow more fine
grained inclusion of packages, whereas others may only provide a few "bundle"
options. Individual packages can always be installed from the CTAN website.
* The third thing to (optionally) include in the preamble are custom commands
or macros (although, technically, such commands can be defined anywhere in
the document).
The \newcommand command is used to define a LaTeX command.
The general form is:
\newcommand{\macroname}[numargs]{content}
* [numargs] specifies number of arguments (e.g. [1], [2], etc.).
Leave it out if the macro takes no arguments.
* The command content includes the macro definition, which consists
of text and/or commands to be substituted in place of the macro. Custom
macros can be nested within other custom macros (but not recursively).
The content argument can occupy more than one line. It can be as simple
as a shortcut for typing out a text sequence, or a complex command with
tens or more lines of text and commands.
Here are a few examples of custom macros defined with \newcommand:
\newcommand{\myname}{Adam Smith}
\newcommand{\partwo}[2]{#1\!\parallel\!#2}
\newcommand{\parthree}[3]{#1\!\parallel\!#2\!\parallel\!#3}
\newcommand{\quadform}[3]{\frac{-#2\pm \sqrt{#2^2-4(#1)(#3)}}{2(#1)}}
The last example demonstrates how it's possible to nest braces inside the
outer braces containing the macro definition.
I can subsequently use these predefined macros in the document (see subsection
Typesetting mathematics).
In my documents I often make use of hundreds of macros. In order to avoid
cluttering up my preamble with macro definitions I created a dozen or so
macro files (divided up thematically). I then use the \include command to
load up the ones relevant to the document at hand.
For example, in a slides presentation on semiconductor devices I included the
following macro files:
\include{shortcuts}
\include{coursestuff}
\include{mathlib}
\include{unitlib}
\include{electriclib}
\include{myflags}
\include{slidestuff}
\include{mygraphics}
* The fourth thing to (optionally) include in the preamble are various custom
formatting directives.
For example:
\setlength\textwidth{0in} % Width of text in page (excluding margins)
\setlength\parskip{0in} % Extra vertical space in between paragraphs
\setlength\parindent{0pt}   % Indentation space at beginning of paragraph
                            % (zero value suppresses indentation)
----------------------
| The document content |
----------------------
After the preamble, the command \begin{document} marks the beginning of the
written document. The end of the document is marked by the command \end{document}.
LaTeX ignores anything that follows.
A simple example:
\begin{document}
% Title
\begin{center}
\huge The LaTeX Howto
\normalsize by \\
\large Joe Smith \\
\normalsize \today \\ % Typeset today's date
\end{center}
LaTeX is an open source document typesetting program written using TeX macros.
The original author, Leslie Lamport, ...
\end{document}
-------------------------
| Typesetting mathematics |
-------------------------
The power of LaTeX can truly be appreciated when typesetting mathematical
content.
The formula environment is used to format equations, and is enclosed by
the commands \begin{equation} and \end{equation}.
An example of typesetting the equation y = kx2 + y0
\begin{equation}
y = kx^2 + y_0
\label{line} % label is optional
\end{equation}
Equations inside a formula environment are numbered automatically. To suppress
numbering add an asterisk to the environment name (i.e. \begin{equation*} and
\end{equation*}). Equivalently, use the command pair \[ and \].
Numbering can also be suppressed by placing the \nonumber command before
\end{equation}.
When a formula is numbered, it's possible to reference the formula using the
\ref command. For example, the LaTeX input line
Equation \ref{line} represents a line in two dimensions.
will insert the formula number where it is referenced, producing typeset text
that looks something like this:
Equation 2.3 represents a line in two dimensions.
Note, if for a given formula numbering is suppressed, the \label and \ref
commands will not be useful for that formula.
The underscore ("_") character in a formula environment tells LaTeX to take the
next character (or set of characters when enclosed in braces) as a subscript.
Similarly a superscript is denoted by the "^" symbol.
The formula environment places the formula on a separate line from the
surrounding text. For an in-line formula (or mathematical expression) enclose
the formula with two $ characters, or with the command pair \( and \).
For example the following LaTeX input
The equation for a line is $y = kx^2 + y_0$, whereby $k$ represents the slope,
and $y_0$ the y-intercept.
will be formatted as such:
The equation for a line is y = kx2 + y0, whereby k represents the slope, and
y0 the y-intercept.
The use of macros can be very beneficial, in particular when typesetting
mathematics where the formatting tends to be complex and likely to contain
repeated constructs. In the following example I make use of the macros defined
earlier (see subsection Preamble).
My name is \myname. In this lecture I am going to teach how to
compute the equivalent resistance of two or more resistors in parallel.
For two resistors in parallel the equivalent resistance is computed as such:
\begin{equation}
R_{\mathrm{eq}} = \partwo{R_1}{R_2}
= \frac{1}{\frac{1}{R_1} + \frac{1}{R_2}}
= \frac{R_1R_2}{R_1+R_2}
\end{equation}
The equivalent resistance of three resistors in parallel may be computed
recursively, as such:
\begin{equation}
R_{\mathrm{eq}} = \parthree{R_1}{R_2}{R_3}
= \partwo{\left(\partwo{R_1}{R_2}\right)}{R_3}
\end{equation}
In the LaTeX input above the \myname will be expanded so that the text reads:
My name is Adam Smith. In this lecture ...
The first formula to appear will be formatted something like this:
                          1            R1 R2
   Req = R1 || R2 = ------------- = ---------
                    1/R1  +  1/R2    R1 + R2
Some commonly used math operators and symbols:
* \pm provides +/- symbol
* \mp provides -/+ symbol
* \| provides parallel (||) symbol
The internet is full of webpages and documents containing listings of LaTeX
symbols. Some listings are shorter, some longer. Here are a couple:
* Wikipedia
Contains a very useful collection of symbols, including commands to generate
calligraphic symbols and other forms used in mathematical notation.
* Comprehensive LaTeX Symbol List (pdf file).
This is probably the most comprehensive collection of LaTeX symbols. As of
2020 it stands at 348 pages.
When typesetting trigonometric functions and other named functions in math
mode, the unaware author will type something like y=sin x.
This, however, will produce improperly formatted output. Firstly, just as the
x and y will be italicized (as they should), so too will "sin", which shouldn't
be because it denotes a function name.
But, worse, the spacing between the letters "s", "i" and "n" will appear
unnatural. TeX is expert at producing proper spacing, however, spacing is
context specific, and within a mathematical formula, it is important for TeX to
know what is a function, and what is simply a product of variables. In this
example, TeX will treat "sin" as the product of the variables s, i, n,
and will space the letters accordingly. Furthermore, the spacing between the
function name and its argument (x) will be incorrect.
To remedy this, LaTeX provides pre-defined command macros for typesetting
commonly used functions. For example the commands \sin, \cos, \arcsin, \log
will simply produce sin, cos, arcsin, log, however, they will be formatted as
function names (i.e. non-italicized and uniform spacing). In the example
above, the correct LaTeX input should be y=\sin x
When a command to typeset a particular function is unavailable or does not
exist, define your own using the \operatorname command. For example:
\newcommand{\sgn}{\operatorname{sgn}}
The command can then be used in math mode as such:
y = \sgn x.
Finally, I touch upon using parentheses in equations. Digital typesetting
has come a long way since the typewriter. The typewriter produced only one
size of parentheses, brackets or braces. LaTeX, in contrast, can produce
parentheses of variable size. For example the equation
\[ \displaystyle y(t) = \sin \left( t + \frac{\pi}{2} \right) \]
will be formatted as (pardon my poor man's attempt at enlarged parentheses)
/ π \
y(t) = sin ( t + --- )
\ 2 /
Similarly, variable sized brackets are typeset with \left[ \right], and braces
with \left\{ \right\}.
A delimiter can be stretched to whatever size is needed by prefixing it with
\left and/or \right.
Delimiters must come in pairs, but need not be a pair of the same type.
The empty delimiter is \left. and \right.
On a final note, it is interesting to observe that when you copy a mathematical
formula from Wikipedia and then paste it in a text editor or terminal you'll see
LaTeX like code. This code is in fact how formulas are typeset in Wikipedia
articles. Refer to this Wikipedia article for more on how Wikipedia displays
mathematical formulas, and the connection to LaTeX.
There is a lot more to say about typesetting mathematics in LaTeX, but I must
stop here, and defer you to the literature and internet resources for more.
Chapter 3 of Leslie Lamport's book "LaTeX: A Document Preparation System" provides a
good introduction to mathematical typesetting.
The following article is a 143 page document on math mode in LaTeX: Math mode
--------------------
| Figures and tables |
--------------------
In professionally typeset documents the typesetter usually places figures and
tables at the top or bottom of the page in a place that is within reasonable
proximity to where the figure/table is referenced. LaTeX does just this.
To include a figure in LaTeX use the figure environment:
\begin{figure}
% Use a command to insert one or more pictures or drawings.
% e.g. \includegraphics{file}
\caption{Figure Caption}
\label{figlabel}
\end{figure}
LaTeX will automatically place this figure at the top or bottom of a page, on
the same or an adjacent page as the surrounding text. It will also
automatically number this figure. To reference the figure in the text use the
\ref command with the label provided by the \label command (e.g.
\ref{figlabel}).
The figure environment is a container for an image(s), text, or a picture
made with the picture environment (see further down in subsection Drawing in
LaTeX). To insert an image using the \includegraphics command see further
on in subsection Including images.
Tables are handled similarly to figures, using the table environment.
\begin{table}
% Use a command to create the table
% e.g. \begin{tabular} ... \end{tabular}
\caption{Table Caption}
\label{tablelabel}
\end{table}
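As a sketch of what the tabular content might look like (the column layout and
entries are purely illustrative):
\begin{table}
\begin{tabular}{|l|c|r|}
\hline
Item    & Quantity & Price \\
\hline
Apples  & 3        & 1.20  \\
Oranges & 5        & 2.50  \\
\hline
\end{tabular}
\caption{Table Caption}
\label{tablelabel}
\end{table}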
------------
| Sectioning |
------------
In Latex the following commands are used to section a document:
\part, \chapter, \section, \subsection, \subsubsection, \paragraph and
\subparagraph. Not all sectioning commands are applicable for a given class.
For example, the article class does not recognize the \part or \chapter
sectioning commands.
An example of using sectioning commands:
\begin{document}
\section{Introduction}
Some words of introduction.
\section{Topic 1}
Some words on this topic.
\subsection{Subtopic 1}
Some words on this subtopic.
\subsection{Subtopic 2}
Some words on this subtopic.
\section{Topic 2}
Some words on this topic.
\end{document}
The above LaTeX input will be formatted something like this
1 Introduction
Some words of introduction.
2 Topic 1
Some words on this topic.
2.1 Subtopic 1
Some words on this subtopic.
2.2 Subtopic 2
Some words on this subtopic.
3 Topic 2
Some words on this topic.
The behavior of the sectioning commands can be modified using various other
commands. For instance, it is possible to modify section numbering to use
Roman numerals (I, II, II.1 etc.).
To suppress the numbering of a section, add an asterisk to the command name.
e.g. \section*{}, \subsection*{}
Internally, LaTeX keeps track of sections, subsections, etc. so that a table of
contents can be generated simply by typing the command \tableofcontents.
The table of contents will show up in the document where the command is issued.
Related commands are
* \appendix - Use this command to inform LaTeX that from this point on
sections are to be numbered in the manner reserved for appendices.
i.e. A, A.1, B, etc.
* \listoffigures - Create a table listing all figures in the document. The
text used to specify each figure defaults to its caption.
* \listoftables - Create a table listing all tables in the document. The
text used to specify each table defaults to its caption.
------------------
| Drawing in LaTeX |
------------------
LaTeX (or TeX for that matter) does not have a graphics engine to generate
arbitrary images or vector graphics. It does, however, come with a set of
commands used to draw certain primitive shapes (lines, dashed lines, arrows,
rectangles, circles, ovals, and quadratic bezier curves). It does so by
piecing together characters from a font specifically designed for that purpose,
much in the same way that I could use certain ASCII characters to make a simple
drawing, like \o/
|
/ \
Of course the font character sets that LaTeX uses give more possible line
slopes, circle diameters and so forth.
To draw a horizontal line, LaTeX pieces together short horizontal lines (e.g.
----------). The line cannot be shorter than the length of a single horizontal
line element. To draw a vertical or sloped line it pieces together line
elements of the appropriate slope.
| / \
| / \
| / \
Not all slopes are possible; only those slopes contained in the font. The
slope is given as a pair of integer coefficients, each at most six in magnitude
for regular lines and at most four for lines capped with an arrow.
LaTeX's drawing font also comes with a selection of circles and disks (filled
circles). When specifying a radius LaTeX will find the closest fit to that
which is available in its fonts (this may differ from one installation to
another).
The picture environment is used to contain a drawing or diagram.
For example:
\setlength{\unitlength}{1cm}
\begin{picture}(6,6)  % specify the width and height of the drawing area
% a horizontal line starting at coordinate (0,0) of length 5
\put(0,0){ \line(1,0){5} }
% place a circle of radius 0.5cm at coordinate (1.3,5.2)
\put(1.3,5.2){ \circle{0.5} }
% place ten "bullet" characters half a cm apart starting at coordinate (1,1)
\multiput(1,1)(0.5,0){10}{\makebox(0,0){$\bullet$}}
\end{picture}
All unmarked lengths in the picture environment are in reference to \unitlength
set by the \setlength command. In the example above it was set to 1cm.
Other units of length which LaTeX is familiar with are: em, ex, in (inch), pc,
pt (point = 1/72.27 of an inch), mm.
If you wish to scale the coordinate system of a picture, simply change the
\unitlength definition. For example
\setlength{\unitlength}{1.5cm}
Note, however, that some attributes such as line widths, will not scale.
--------
| Colors |
--------
LaTeX supports colors using the color package.
* The basic pre-defined colors are:
black, white, red, green, blue, cyan, magenta, yellow
* Custom colors can be defined with the \definecolor command.
For example:
\definecolor{light-gray}{gray}{0.95}
* It is also possible to define a color using the CMYK model
\definecolor{Brown}{cmyk}{0, 0.8, 1, 0.6}
* Text can be colored as such \textcolor{color}{text...}
* A box can be colored as such \colorbox{background colour}{text}
* A box can be colored with separate frame and background color
\fcolorbox{frame colour}{background colour}{text}
* To set the background color of a page \pagecolor{colorname}
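A short snippet combining these commands (the color choices are arbitrary, and
light-gray is the custom color defined above):
This is \textcolor{red}{red text}, this is \colorbox{yellow}{highlighted},
and this is \fcolorbox{blue}{light-gray}{framed with a custom background}.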
To place text in colored boxes load the tcolorbox package.
Usage example:
\begin{tcolorbox}[width=\textwidth,colback={green},title={With rounded corners},colbacktitle=yellow,coltitle=blue]
\blindtext[1]
\end{tcolorbox}
--------------------
| Including graphics |
--------------------
This subsection is about incorporating graphics produced by third party software
into a LaTeX document. When I say third party software I mean software that
produces graphics or images in formats that are not understood by TeX/LaTeX
which includes basically any graphics format other than that produced by the
picture environment described earlier in subsection Drawing in LaTeX.
As mentioned earlier, Tex/LaTeX know nothing about graphics. To TeX, including
an image in the formatted output is treated in the same manner as including a
character. Just as TeX must know the font metrics for a character when
including it, similarly it must know the image metrics (i.e. size and reference
point). TeX/LaTeX will then treat the incorporated image as an imaginary box to
be aligned according to the formatting instructions specified by the author.
The actual image will only be rendered outside LaTeX by the rendering software
(e.g. xdvi, dvips).
The command used to include a graphics object is \includegraphics.
\includegraphics{filename}
This requires including the graphics package in the preamble.
\usepackage{graphics}
When LaTeX can determine the size specification of the included graphics file,
it need not be specified explicitly. When the size specification is lacking or
LaTeX does not know how to obtain it, it must be specified explicitly using
optional arguments. For example
\includegraphics[3in,2in]{imagefile.eps}
Note, the enhanced graphics package graphicx provides the same commands
as the graphics package, but differs in the format of optional arguments.
Follow this link for more.
If including images in a document compiled with pdflatex, the images should
first be converted to a format pdflatex understands (e.g. PDF, PNG, or JPEG);
pdflatex cannot include EPS images directly. Also note, pdflatex doesn't work with epic
specials. To find out more about command line options refer to the man page.
-----------
| Font size |
-----------
Font sizes in LaTeX are not specified explicitly character by character as in
word processors. Rather, a size is specified for the document as a whole, and
within the document LaTeX adjusts the sizes of characters or words as it sees
fit for a given context. For example, the font size for a subscript or
superscript will be smaller than that of the main text.
The font size for the whole document is set by adding an option to the
\documentclass command.
10pt, 11pt, and 12pt are available for most classes.
For example, to set the article class to 10pt:
\documentclass[10pt]{article}
The extsizes package provides additional sizes: from 8pt to 20pt
To change sizes within the document you can use the following commands
\tiny
\scriptsize
\footnotesize
\small
\normalsize
\large
\Large
\LARGE
\huge
\Huge
The moresize package provides two additional size commands, namely
\HUGE and \ssmall.
For example
{\Large Hello \small there}
will typeset a large "Hello" followed by a small "there".
---------------------
| Including URL links |
---------------------
To insert clickable URLs in your document, specify the hyperref package in
the preamble: \usepackage{hyperref}
Within the document enclose URLs within a \url command. For example
\url{https://myurl.com}
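The hyperref package also provides the \href command for attaching descriptive
text to a link. For example
\href{https://myurl.com}{my home page}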
--------------
| LaTeX output |
--------------
Latex is a macro layer for TeX. Therefore, TeX is what actually produces the
output. You are thus referred to subsection TeX output for more about that.
To have LaTeX generate a PDF document directly, use the pdflatex command
$ pdflatex mydoc
This command will output mydoc.pdf (rather than mydoc.dvi).
------
| Xdvi |
------
The default previewer for TeX/LaTeX documents is xdvi.
See above for more.
--------------
| Hebrew Latex |
--------------
(The following was useful information when Hebrew support in LaTeX was in its
infancy. See subsection Culmus for that which comes in its place).
* The LaTeX command for compiling Hebrew documents was named elatex
This command supports a right-to-left mode.
* For generating PDF output, the command pdfelatex was used.
Nowadays simply invoke latex and pdflatex and they will automatically select
the correct program.
Packages provided with Hebrew Latex
* hebcal.sty
* hebfont.sty
* hebrew_newcode.sty
* hebrew_oldcode.sty
* hebrew_p.sty
* setspace.sty
Hebrew font package "hebfont.sty" includes the commands:
\textjm{Text}, \textds, \textoj, \textta, \textcrml, \textfr, \textredis,
\textclas, \textshold, \textshscr, \textshstk
\jm, \ds, \oj, \ta
The Hebrew calendar package hebcal.sty includes commands
\Hebrewdate, \hebrewtoday, and more
Hebrew macros files: vowels.tex and others
----------------
| Culmus package |
----------------
The LaTeX Hebrew functionality described above has been superseded by the
culmus package. Some improvements include:
* Additional fonts
* Improved formatting.
* Culmus works with the color package, whereas the above didn't.
Note, the most up-to-date way of incorporating Hebrew (and other languages) is
with the XeLaTeX and the polyglossia package. See subsection "Polyglossia" for
more.
Of note, the picins package required minor revisions to work with culmus. I
added the picins macros, obtained from CTAN, under
/usr/share/texlive/texmf-dist/tex/latex/picins.
To set up a LaTeX document to use Hebrew include the "hebrew" option in
\documentclass and include some packages, as in the example:
\documentclass[hebrew,english,a4paper]{article}
% packages
\usepackage[utf8x]{inputenc}
\usepackage[hebrew,english]{babel}
\usepackage{culmus}
To set the default input to Hebrew, use the \sethebrew command, and
to switch back use \unsethebrew.
\begin{document}
\sethebrew % Makes default Hebrew
שלום
\unsethebrew % Makes default English
Hello
\end{document}
To temporarily switch from left-to-right to right-to-left direction or vice
versa, use \R{} and \L{} commands.
\begin{document}
\sethebrew
\L{In English Hello is} "שלום"
\unsethebrew
In Hebrew ``Hello'' is \R{שלום}.
\end{document}
I have not been able to easily incorporate Hebrew vowels with this method.
More extensive support for Hebrew is available with XeLaTeX and the Polyglossia
package.
To install the culmus package, search for "culmus" in your repository.
You may also need to install the utf8x package, which handles the UTF-8
character encoding with BIDI support.
In Fedora:
$ dnf search culmus
-------------
| Polyglossia |
-------------
The Polyglossia package is an alternative to the Babel system for XeLaTeX and
LuaLaTeX, supporting multilingual typesetting. Read more about it here.
Note, this package does not work with LaTeX or pdfLaTeX.
Example: Creating a command for font switching.
\usepackage{polyglossia} % multilingual support package
\newfontfamily\hadasim{Hadasim CLM Regular}
\DeclareTextFontCommand{\txthadasim}{\hadasim}
In the document, Hebrew text using the font declared above (hadasim) may be
inserted with the command \txthadasim. For example:
\txthadasim{שלום}
Note, when including fonts as above, the font name should match the font file
name in /usr/share/fonts. For example, the font utilized above, "Hadasim CLM
Regular", is derived from the name of the font file "HadasimCLM-Regular.ttf".
I don't know how to specify non-ttf fonts. This essentially leaves out any
fonts in the texmf/fonts directories (e.g. /usr/share/texlive/texmf-dist/fonts)
as there are very few ttf fonts there. It may be this package doesn't support
non-ttf fonts.
I illustrate usage with another example
\documentclass{article}
\usepackage{polyglossia}
% settings
\defaultfontfeatures{Mapping=tex-text, Scale=MatchLowercase}
\setdefaultlanguage{english}
\setotherlanguage{hebrew}
\newfontfamily\hebrewfont[Script=Hebrew]{Hadasim CLM Regular}
\begin{document}
% To insert inline Hebrew text use the "\texthebrew" command
Hello \texthebrew{שלום}
% To insert a block of Hebrew text use Hebrew environment. Be sure to start
% Hebrew block on a new line (by leaving one line blank or using the \\ command)
\begin{hebrew}
שלום
\end{hebrew}
\end{document}
See here for more on LaTeX and Hebrew.
----------------
| Fonts and NFSS |
----------------
NFSS is LaTeX's New Font Selection Scheme.
The original TeX/LaTeX implementation had limited fonts, and commands that
referenced the default font were basically hard wired.
For instance, {\bf\sf ...}
(bf=bold face, sf=sans serif face) did not result
in a bold sans-serif font, but rather in just sans-serif.
NFSS was put into place to provide much more flexibility and transparency when
dealing with different fonts.
The original LaTeX fonts had only 128 glyphs. Accented symbols were made with
the \accent command. The encoding of these fonts is [OT1].
256 glyph fonts have built-in accented symbols, which are recommended for
non-English Latin languages such as French and German. Encoding of these fonts
is [T1].
Use \usepackage[T1]{fontenc}
to support this encoding.
TeX/LaTeX doesn't directly support Unicode. It's not that you can't generate
practically any character you want, it's that you have to use command names
or font switching methods to display them, and each font contains not more than
256 glyphs. For example, to display the Greek symbol φ you use the command
\phi
(in math mode, i.e. $\phi$). TeX will load the necessary font and
display the correct glyph. The inputenc options utf8 and utf8x allow LaTeX to
correctly interpret documents written in UTF-8 encoding. However, this support
is limited and incomplete. For example, utf8 doesn't work with Hebrew while
utf8x does.
The variants XeTeX, XeLaTeX, on the other hand, support unicode natively, as
well as right-to-left scripts. LuaTeX and LuaLaTeX do as well. If preparing
multi-language or mixed language documents, it is recommended to use these
variants of TeX in conjunction with the polyglossia package (see above).
XeTeX and XeLaTeX produce pdf by default, although they can produce extended
DVI output.
Changing the default font
-------------------------
To change the default font use the \renewcommand.
For example to change the default roman font to ptm (Adobe Times)
\renewcommand{\rmdefault}{ptm}
Anywhere in the document that the roman font is called for, Adobe's Times font
will be used.
Similarly, to change the sans-serif and typewriter default styles to Helvetica
and Courier respectively:
\renewcommand{\sfdefault}{phv}
\renewcommand{\ttdefault}{pcr}
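For instance, a minimal preamble putting these defaults together (a sketch
using the standard PSNFSS family names):
\documentclass{article}
\usepackage[T1]{fontenc}
\renewcommand{\rmdefault}{ptm} % Times
\renewcommand{\sfdefault}{phv} % Helvetica
\renewcommand{\ttdefault}{pcr} % Courier
\begin{document}
Roman text, {\sffamily sans-serif text}, and {\ttfamily typewriter text}.
\end{document}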
See fntguide.pdf for more.
See p. 37 in Lamport's LaTeX book about font styles.
Testing fonts
-------------
See subsection below.
---------------------------
| Additional LaTeX packages |
---------------------------
Other LaTeX packages of interest
* xcolor - Color extensions to color package.
Allows color tints, shades, tones, and more.
* figflow - Provides command
\figflow to allow inserting a figure in a paragraph.
* fp - Fixed point arithmetic
* slides - A package for producing slides.
See below for more.
--------
| Slides |
--------
When making slides, you can use LaTeX's slides class.
Firstly, LaTeX needs to know to use the slides class.
\documentclass[landscape]{slides}
This class provides for enlarged fonts and a framework appropriate for a
document consisting of slides. Use "landscape" option for horizontally
oriented slides.
To create a slide, enclose slide content within \begin{slide} and \end{slide}.
For example
\begin{slide}
\begin{center}
Slide Title
\end{center}
\vspace{1in}
\begin{itemize}
\item Item 1 in my slide
\item Item 2 in my slide
\item etc...
\end{itemize}
\end{slide}
To obtain a more elaborate slide design, you can define your own slide command.
For example, I define the command \vug as follows:
\newcommand \vug[2] {
{
\setlength\unitlength{1cm}
\begin{picture}(0,0)
\thicklines
\put(26,9){\oval(2,2)[bl]}
\put(13,0){\oval(26,18)}
\put(1,7.5){\rule{24cm}{4pt}}
\put(25.5,8.5){\makebox(0,0){\tiny \authorsname}}
\placepageno
\put(1,7.5){\makebox(24,1.5){\Large #1 }} %--- Title
\put(1,6.5){\parbox[t]{\textwidth}{\large \sf #2}}
\end{picture}
}
\newpage
}
This command takes two arguments:
* Argument #1 is for the slide title
* Argument #2 is for the content
It uses the picture environment to add a boundary and properly place the slide
title and content in their respective locations.
Example usage:
\newcommand{\authorsname}{Jack Smith}
\vug{Title of Slide}{
\begin{itemize}
\item Item 1 in slide
\item Item 2 in slide
\item etc...
\end{itemize}
}
The slide will be formatted as such:
---------------------
/ Title of Slide \
-------------------------
| * Item 1 in slide |
| * Item 2 in slide |
| * etc ... |
| |
| |
\ Jack Smith /
---------------------
The difference between the custom slide command, \vug, and the standard LaTeX
slide, is that \vug draws an oval boundary around the slide, and places a
horizontal divider between the slide title and slide content. The idea of
customizing the slide design can be taken further to include enhanced features,
such as:
* Inserting a background picture
* More elegant borders and designs
* Incorporating more than one slide design in the presentation. For example
defining three types of slides \vugtitlepage, \vugplain, \vugwithbackground
The slide environment is described further in Lamport's book, CTAN, and various
internet resources.
----------------
| Spell checking |
----------------
To spell check a non-LaTeX document, use
$ spell file
or
$ hunspell file
If used on a LaTeX document, LaTeX command names and directives will be
reported as spelling errors (unless the command spells out a legitimate word).
The latter program has a mode for properly handling TeX and LaTeX files
$ hunspell -t file.tex
Another option is the aspell program.
$ aspell --lang=en --mode=tex check file.tex
See this webpage for more.
If using LuaTeX, the spelling package is available.
-------------------------
| Things to watch out for |
-------------------------
* color package
When using the color package (i.e. \usepackage{color}) in a picture
environment, invoking the command \color{...} somewhere in the middle of a
scope sometimes causes whatever follows the color command to shift over.
When possible, it is a good idea to place the color command at the beginning
of a scope, and to eliminate white space within the scope.
For example, doing this
\put(2,0){\line(0,1){5}
\color{red}
\line(1,0){5}}
will cause \line(1,0){5} to be shifted right, which is not the intention.
|
|
| _______
To resolve the problem, eliminate any white space (including line breaks)
\put(2,0){\line(0,1){5}\color{red}\line(1,0){5}}
|
|
|______
* Using \jput command.
The epic (extended picture) package defines a command \jput that is to
be used in a picture environment.
To use the \jput command you need to redefine the \put command as such:
\renewcommand\put{\jput}
Using \put in the joint environment creates a problem when there is a \put
inside of a \put (nested \put).
A quick fix to this, is to store the original \put in a command
(e.g. \origput) before renewing it to \jput, and then restoring the original
\put command when needed.
For example:
\let\origput\put % store original \put command as \origput
\renewcommand\put{\jput}
% Place here whatever drawing commands are to be used with the joint feature.
% When done, restore \put to its original definition
\let\put\origput
% Now you can nest \put commands (although these \put instances are no longer
% treated as \jput).
---------------
| Miscellaneous |
---------------
To cause LaTeX highlighting in emacs:
Type: Meta-x load-library
Enter: hilit19
Highlighting occurs whenever Ctrl-L (screen refresh) is pressed.
To recover a file if vi has suddenly quit:
$ ex -r fas.tex
----------
| Unsorted |
----------
To generate bibliographic entries in tex
$ bibtex myfile
It's likely you'll need to run this more than once in conjunction with compiling
a LaTeX document.
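For example, a typical sequence (assuming the document cites entries from a
.bib file) is
$ latex myfile
$ bibtex myfile
$ latex myfile
$ latex myfile
The repeated latex runs let the citations and the bibliography references
resolve.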
LaTeX syntax
\unrhd - an operator symbol *
* requires latexsym package
To find various latex files
$ kpsewhich
Display drivers: dvips, pdftex, xdvi, ps2pk, gsftopk, dvipdfm, dvipdfmx
updmap is a utility that creates font configuration files for any of the above
display drivers. It can be used for enabling a Map file and other functions
relating to the proper identification of fonts by these drivers.
See section TeX Admin Utils below for further details.
-----------------
| Troubleshooting |
-----------------
(1) Hebrew fonts recognized by Latex engine but unknown (not displaying) to xdvi
(e.g. mktexpk: don't know how to create bitmap font for rfrankb.)
* First thing to try:
$ sudo mktexlsr # recreate database of tex paths
$ sudo updmap-sys --enable Map=culmus.map # enable font map
It could be that after an upgrade/update of tex/culmus the latter step was not
performed.
* If this doesn't work, there could be a problem with updmap-sys enabling
the fonts.
There were times where I manually copied culmus' map files from a previous
version to the current version. In particular the map files in directories:
/var/lib/texmf/fonts/map/dvips/updmap
/var/lib/texmf/fonts/map/pdftex/updmap
followed by
$ sudo texhash
* There is also a workaround which manually specifies the map file to use.
It is described at the end of this subsection in uniman.
* Another thing that has happened in the past is that the tfm files in
/usr/share/texlive/texmf-local/texmf-compat/fonts/tfm/public/culmus
and vf files in
/usr/share/texlive/texmf-local/texmf-compat/fonts/vf/public/culmus
were corrupt (they all had the same size and didn't match the previous version).
Again, I manually copied the contents of these directories from a previous
version to the current.
If latex can't find your culmus_font.pfa (type 1 font), then there may be
an improperly configured link to your culmus fonts directory.
$ cd /usr/share/texmf/fonts/type1/public
This should have a link to your culmus directory.
If the link was improperly configured or is absent, then
$ sudo ln -s /usr/share/fonts/culmus /usr/share/texmf/fonts/type1/public/culmus
(2) Error: /var/lib/texmf/web2c/pdftex/latex.fmt made by different executable version
(Fatal format file error; I'm stymied)
For a full discussion see this webpage.
One of the respondents suggests running (as root)
$ fmtutil-sys --all
Note: the above utility will place the format file in /var/lib/texmf/web2c/pdftex.
I think running fmtutil --all as a user will place it in
~/.texlive/texmf-var/web2c/pdftex
This should also fix the problem for user, although not for other users.
********************************************************************************
* - Metafont -
********************************************************************************
Metafont is a companion program to TeX written by Donald Knuth (author of
TeX) used to design and generate fonts for use with TeX. Metafont does not
offer a graphical interface. Rather the fonts are described using the Metafont
language, which is a programming language specifically tailored for font design.
I offer here a brief tutorial on Metafont. To learn Metafont more in depth, I
recommend Donald Knuth's "The Metafont Book". Also, see "The Metafont
Tutorial".
To work with Metafont, open a shell such as xterm or Gnome Terminal.
To run Metafont in interactive mode:
$ mf
The ** prompt expects an input file. To work interactively type \relax.
An * prompt will appear. You are now in scroll mode where Metafont expressions
and commands may be entered.
Whenever an error occurs, you will be bumped out of scroll mode, and will
receive the prompt ?
To return to scroll mode type "s".
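For example, a short interactive session might look like this (a sketch; the
drawing commands are described later in this section):
$ mf
**\relax
*pickup pencircle scaled 2;
*draw (0,0)..(50,80)..(100,0);
*showit;
*end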
Interactive mode is a good way to experiment with Metafont, or work through a
preliminary design for a glyph, but ultimately you'll be writing Metafont
source files and compiling them. A metafont source file has suffix .mf
In the Metafont language a glyph is described mathematically, but ultimately
the glyph must be converted to a raster (bitmap) image for inclusion in a
document (e.g. Postscript, PDF). This is accomplished with the Metafont
compiler, mf. The resolution of the glyph's raster is an important parameter
that needs to be passed to the Metafont compiler. Resolution is measured in
dots-per-inch (dpi). ljfour is one of the available modes in Metafont. It
suits a 600 dpi font resolution.
TeX, LaTeX, xdvi, dvips, and other programs that access Metafont fonts, usually
expect to find these fonts somewhere in the TeX directory hierarchy. Where the
TeX directories reside depends on which distribution of TeX was installed and
how it was installed. I installed the TeX Live distribution through the Fedora
repository, and it placed most TeX related files in /usr/share/texlive/.
If you know where your TeX root directory is, then look for a directory called
fonts. This directory contains quite a few subdirectories. The three
relevant to Metafont are:
* source - contains source files for Metafont fonts
* pk - packed (bitmap) versions of compiled Metafont fonts
* tfm - character information (necessary for positioning and alignment)
In my TeX installation, the top level font directory is
/usr/share/texlive/texmf-dist/fonts.
This directory contains many fonts. Probably the most well known (in the TeX
world) are Donald Knuth's Computer Modern fonts.
The Metafont source files for them are located in
/usr/share/texlive/texmf-dist/fonts/source/public/cm
The packed version at 600 dpi are to be found in the directory
/usr/share/texlive/texmf-dist/fonts/pk/ljfour/public/cm/dpi600/
The tfm files for his fonts are in
/usr/share/texlive/texmf-dist/fonts/tfm/public/cm
In Unix like OSs ~ refers to the user's home directory (e.g. /home/jdoe), and
so does the shell variable $HOME. I'll use both.
TeX, Metafont, LaTeX and other TeX related programs know where to find system
installed fonts. However, a user who wishes to install fonts locally (i.e.
somewhere within his home directory), must do two things:
* Set up a local directory structure that is similar to the system's TeX
directory structure.
* Inform the TeX software about it.
The TeX software will then know how to locate the locally installed font(s).
First, create the local texmf directory
$ mkdir ~/texmf
This directory will be the local TeX/Metafont directory for the user.
Next, create the subdirectories relevant to Metafont:
$ mkdir -p ~/texmf/fonts/source # where source files go (*.mf)
$ mkdir ~/texmf/fonts/pk # where compiled (bitmap) fonts go (*.pk)
$ mkdir ~/texmf/fonts/tfm # where font metrics files go (*.tfm)
To let TeX and Metafont know about the local texmf directory, you'll need to
define an environment variable LOCALTEXMF.
If using the BASH shell, issue the command
$ export LOCALTEXMF="$HOME/texmf"
To make persistent, add the following line to your .bashrc file
export LOCALTEXMF="$HOME/texmf"
Assume the source file for the font to be compiled is
~/texmf/fonts/source/myfont1.mf.
(See further down for a complete walk through including the font source.)
To compile the Metafont source file in ljfour mode, in the command line enter
$ mf '\mode=ljfour;' input myfont1.mf
The compilation will produce three files:
* myfont1.600gf - A bitmap representation of the glyphs in the font for the
specified resolution (for the example above it's 600 dpi).
* myfont1.tfm - Contains dimensional and positioning information for the glyphs
in the font. It is used by TeX, LaTeX and dvi viewers.
* myfont1.log - Log of compilation process.
DVI drivers (e.g. dvips, xdvi) do not read gf files directly. They expect to
work with a "packed" form of the font bitmaps contained in the gf file. These
are fine-tuned to display well on the target printer device (or display).
As such, the gf file must first be converted into this packed form. The packed
files have extension .pk.
See this thread about the difference between the gf and pk formats.
To create a packed (pk) version of the font raster, use the command gftopk
$ gftopk myfont1.600gf
For xdvi to locate the necessary files (myfont1.tfm, myfont1.pk) they have to be
placed in the proper directories (see subsections System font directories
and Local font directories for more about these directories).
After completing the font design, you'll want to proof the font.
* Change directory to where the font source files are located
$ cd ~/texmf/fonts/source/
* Compile the source file with Metafont
$ mf '\mode=localfont; mag=magstep(0);' input myfont1.mf
(Compilation is also possible in interactive mode.)
* Create a proof readable by xdvi using the gftodvi utility.
$ gftodvi myfont1.600gf
If you receive an error "gftodvi: fatal: tfm file `gray.tfm' not found."
then follow the procedure titled "Error ...", just below.
* To display the font proof
$ xdvi myfont1.dvi
Error "gray.tfm not found"
--------------------------
An additional special font used in generating the proofs called "gray" is
needed to successfully run the command "gftodvi myfont1.600gf".
In my installation the font source is located at
/usr/share/texlive/texmf-dist/fonts/source/public/knuth-local/gray.mf
It did not, however, come with a corresponding font metric file gray.tfm.
I, therefore, needed to generate one from the source. To do so, invoke the
following command (see here for more details):
$ mktextfm gray
The resulting font metric file created was
~/.texlive2017/texmf-var/fonts/tfm/public/knuth-local/gray.tfm
To compile and install a Metafont file so that it is accessible systemwide
follow this procedure (note, some steps require root privileges):
$ mf '\mode=localfont; mag=magstep(0);' input myfont1.mf # compilation
$ gftopk myfont1.600gf # convert to packed form
Locate your texmf root directory and store it in a variable. For me that's
$ TEXMFROOT=/usr/share/texlive/texmf-dist
Copy the font files into the respective directories.
$ sudo cp -f myfont1.mf $TEXMFROOT/fonts/source/public/myfont
$ sudo mv -f myfont1.tfm $TEXMFROOT/fonts/tfm/public/myfont
$ sudo mv -f myfont1.600pk $TEXMFROOT/fonts/pk/myfont
Note, if the myfont directories have not been previously created, they will
need to be created before implementing the above procedure.
$ sudo mkdir $TEXMFROOT/fonts/source/public/myfont
$ sudo mkdir $TEXMFROOT/fonts/tfm/public/myfont
$ sudo mkdir $TEXMFROOT/fonts/pk/myfont
Also, after the first time implementing the above steps, you need to regenerate
the TeX directory database using the command mktexlsr (see subsection "the TeX
directory hierarchy and mktexlsr" for more about it). This will let TeX know
the locations of files associated with the new font.
In case a previous version of this font has already been installed and used,
a cached version of the packed font exists in the user's cache. Xdvi will
continue to use the cached version until it is removed. To remove it, locate
your user's TeX cache directory. For me it's ~/.texlive2017/texmf-var/
Within the cache directory locate where the packed version of the font is
and then remove it. For example
$ rm -f ~/.texlive2017/texmf-var/fonts/pk/ljfour/public/myfont/myfont1.600pk
This is a minimal reference to Metafont syntax and commands. Use it as a quick
reference. To learn how to design a glyph refer to the above noted literature.
As in TeX and LaTeX, comments in Metafont are preceded by the "%" symbol.
Metafont options
----------------
* tracingequations:=1;
% Turn on tracing of equations (in interactive mode)
* tracingonline:=1;
% Turn on tracing of equations on line
Binary and other arithmetic operators
-------------------------------------
* Addition: a+1
* Subtraction: a-1
* Multiplication: 2*a, or equivalently 2a
* Division: a/2
* Square root: sqrt
* Raise to power: **
* Integer division and remainder: div, mod
* Rounding: floor, ceiling, round
* Misc operators: abs, max, min
Pair operators
--------------
* Predefined pairs: up = (0,1), down = (0,-1), left = (-1,0), right = (1,0),
origin = (0,0)
* Parametric displacement from z1 in direction of z2: t[z1,z2]
(substitute a number for t)
* Length between two points: length(z2-z1)
* Dot product: (a,b) dotprod (c,d)
* Ensuring two lines are perpendicular: (z2-z1) dotprod (z4-z3) = 0
Angle operators
---------------
* Sine and cosine in degrees: sind, cosd
* Directions:
* dir
- Takes an angle as an argument and returns a unit vector
* angle
- Takes a vector and returns an angle
Coordinates
-----------
* Coordinates can be specified in a number of ways. Examples:
x1 = 100
y1 = 150
(x1,y1) = (100,150)
* z1 is equivalent to (x1,y1). (Sorry, no 3D in Metafont.)
e.g. z1 = (100,150)
The index 1 can be replaced with any other index (i.e. 2, 3, ...)
* Displacement of z2 from z1:
z2-z1
* A midpoint of two coordinates
(z1+z2)/2;
* Scaling a coordinate or coordinate pair by a:
ax1
az1
* Use parameterization to indicate a point on the line connecting z1 and z2
z1 + t(z2 - z1)
This is equivalent to the pair operator t[z1,z2] mentioned above
* Define z to be somewhere unspecified on the line connecting z1,z2
z = whatever[z1,z2]
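For example, whatever can be used to let Metafont solve for the intersection
of two lines (a small sketch; the coordinates are arbitrary):
z1 = (0,0);   z2 = (100,100);
z3 = (0,100); z4 = (100,0);
z5 = whatever[z1,z2]; % z5 lies somewhere on the line through z1 and z2
z5 = whatever[z3,z4]; % ...and also on the line through z3 and z4
show z5;              % Metafont solves the equations: (50,50)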
Variables
---------
Some examples:
x, y, z, t, whatever (unspecified), epsilon (1/65536), infinity (4096-epsilon)
Geometric transformations
-------------------------
* (x, y) shifted (a, b)
Shifts the vector represented by (x,y) by the vector (a,b)
Equivalent to (x + a, y + b)
or (x, y) + (a, b)
* (x, y) scaled s
Scales the vector represented by (x,y); s is the scaling parameter.
Equivalent to (sx, sy)
or s(x, y)
* (x, y) xscaled s
Scales only the x coordinate.
Equivalent to (sx, y)
* (x, y) yscaled s
Scales only the y coordinate.
Equivalent to (x, sy)
* (x, y) xscaled s yscaled t
This pipelines two transformations: (x, y) xscaled s is then fed to yscaled t.
Equivalent to (sx, ty)
* (x, y) rotated a
Rotates point (x, y) about (0, 0) by angle a
* (x, y) rotatedaround ((a,b),c)
Rotates point (x, y) about coordinate (a, b) by angle c.
* (x, y) reflectedabout (z1, z2)
Reflects point (x, y) about the line connecting the points z1 and z2.
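A small sketch applying some of these transforms to a single point (the
resulting values are given in the comments):
z1 = (100,0);
z2 = z1 rotated 90;                   % (0,100)
z3 = z1 shifted (50,50);              % (150,50)
z4 = z1 xscaled 2 yscaled 0.5;        % (200,0)
z5 = z1 reflectedabout ((0,0),(0,1)); % (-100,0), reflected about the y axis
show z2, z3, z4, z5;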
Curves
------
* draw z1 .. z2
Draws a line between z1 and z2.
* draw z1 .. z2 .. z3
Fits a curve between z1, z2 and z3.
The curve generated by Metafont abides by the following rules:
* Passes through all specified points.
* Keeps the curvature at a minimum at the points specified.
* Avoids inflection points between any two points in the sequence (if
possible).
* draw z1{up} .. z2{(1,5)} .. z3{dir 5} .. z4{down}
The curly braces appended to some or all of the drawing points affect the
slope of the curve at those points.
{} contains a direction, e.g. {dir 60}, {left}, {z2-z1} (the direction of the
vector z2-z1).
* draw z1 .. z2 .. z3 .. z4 .. cycle
Draws a closed curve consisting of z1-z4.
Pens
----
* To apply a particular pen:
pickup pentype scaled x;
* Types of pen:
* pencircle - A circular nib (can be made oval with xscaled/yscaled)
* pensquare - A square nib (can be made rectangular with xscaled/yscaled)
* penrazor - A razor nib
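For example, to pick up an oval, rotated nib and draw a stroke with it (a
sketch; assumes mode_setup has been run so that pt is defined):
pickup pencircle xscaled 0.8pt yscaled 0.2pt rotated 30;
draw (0,0){up} .. (50,80) .. {down}(100,0);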
Drawing, filling, erasing commands
----------------------------------
* draw
- Draws a path of thickness based on the currently "picked up pen".
* fill
- Fills the region inside a path (path must be closed - end with cycle).
No thickness is added.
* filldraw
- Both draws a path using current pen, and fills the enclosed region.
* drawdot
- Draws a dot at the given point.
* undraw, unfill, unfilldraw, undrawdot
- Reverses the action of the corresponding command above for the given
path/dot.
* cullit
- Tells Metafont to do a complete erase (turn the pixel in the
affected region back to 0)
e.g. cullit; unfill c; cullit;
Shorthand notation is: erase fill c = cullit; unfill c; cullit;
Pictures
--------
* currentpicture
- variable containing current pixel pattern.
* picture v[]
- defines an array of pictures.
e.g. v1 = currentpicture;
* clearit
- clears the current picture (automatically done at the beginning of
beginchar).
Magnification and resolution
----------------------------
Mode setting:
mode_setup
A subroutine that initializes all the appropriate mode related
variables: pt, pc, in, bp, cm, mm, dd, cc (see p. 92)
mode_setup does so in accordance with the value of \mode and mag
e.g. \mode=mydevice; mag = 2;
Device has a certain resolution, and that is one consideration in
relating the above variables to pixels.
The other is magnification.
Sharped (#) quantities (p. 92 in Metafont book):
These are used to define ad hoc dimensions (e.g. em, x_height)
(that are not from above list, pt, pc, etc.).
e.g. em# = 2mm#;
Defining pixels for ad hoc dimensions:
e.g. define_pixels(em, x_height)
This is equivalent to: em := em#*hppp;
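For example, a small sketch defining ad hoc dimensions and converting them to
pixels (the names and values are illustrative):
mode_setup;
em# := 10pt#;                % "sharped" (device-independent) dimensions
x_height# := 4.5pt#;
define_pixels(em, x_height); % now em = em#*hppp and x_height = x_height#*hppp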
Boxes
-----
To build a character use the subroutine
beginchar(code,w,h,d)
- Code can be an integer from 0 to 255, or a string (e.g. "A");
- w is width of box enclosing character.
- h is height above baseline of box enclosing character.
- d is depth below baseline of box enclosing character.
When TeX compiles a document it calculates character spacing and positioning
based on the dimensions of the characters' boxes (available in the font
metric file) rather than bitmaps. Thus the above parameters should be chosen
carefully.
Horizontally, TeX will arrange boxes one next to each other, although, not
necessarily touching. The w parameter tells TeX how much horizontal space
a character occupies.
Vertically, TeX must keep track of both interline spacing, and character
alignment on a line. Both the h and d parameters are used for that.
In general to place characters on a line, the baselines of the characters are
aligned.
A fourth parameter describing the character box is italic correction.
It may be set with the command "italcorr".
All dimensions should be given as # quantities.
For example:
"My character"; % A descriptive string describing the character
beginchar(127,c_width#,c_height#,c_depth#)
(c_width etc... should be defined beforehand)
Examples of Commands
--------------------
* draw (-100,50)..(150,50)
* drawdot (35,70)
* labels(range 1 thru 6)
* showit
* shipit
* end % complete Metafont session
In the following example you'll be creating a font whose character set consists
of 52 glyphs, each shaped like a single cycle of a sinusoidal waveform. The
height (corresponding to amplitude) is fixed and the width (corresponding to
wavelength) varies from 1 mm to 11 mm.
The Metafont source code consists of two files:
(1) The first file, wfsin.mf, contains two subroutines:
(i) sinwave, which takes four arguments:
* charord: an integer specifying the position of the character within the
character set
* sinwidth: width or wavelength of sinusoidal waveform
* sinheight: height or amplitude of sinusoidal waveform
* L: Number of control points for a single cycle of the sinewave
The purpose of this subroutine is to produce a glyph in the shape of a
single cycle of a sinewave of a given wavelength and amplitude.
(ii) genchars, which takes one argument:
* height: the sinusoidal waveform amplitude
This subroutine calls the sinwave subroutine 52 times, each time
generating a sinusoid of a slightly larger wavelength.
(2) The second file, wfsini.mf, loads the first file so as to have the two
subroutines at its disposal. It then calls the genchars subroutine with
an argument of 1 to produce sinewaves of amplitude 1.
The shipit command at the end of the subroutine creates the usable
font files, which consist of the bitmap file (wfsini.600gf) and the font metric
file (wfsini.tfm).
The first step in this example is to create the file
~/texmf/fonts/source/wfsin.mf, and paste into it the following Metafont code:
% A character set comprised of "sinewaves" of fixed amplitude and varying
% frequencies
mode_setup;
define_pixels(mm);
pi := 3.14159;
% one cycle of a sinewave
def sinwave(expr charord, sinwidth, sinheight, L) =
% dirx := 1/L; % x-component in slope vector
beginchar(charord,sinwidth*mm#,sinheight*mm#,sinheight*mm#); "A sinewave";
path p;
pickup pencircle scaled 0.5pt;
numeric xn, yn, sinarg;
for i = 0 step 1 until L:
xn := i/L; % x-normalized: 0 <= xn <= 1 (xn between 0 and 1)
sinarg := 360*xn; % convert to degrees
yn := sind sinarg; % y-normalized: -1 <= yn <= 1
x[i] = xn*w; % actual x-coordinate: 0 <= x <= w
y[i] = yn*h; % actual y-coordinate: -h <= y <= h
diry[i] := (sinheight/sinwidth)*2*pi*cosd(sinarg); % y-component of slope vector assuming x-component = 1
%show (1,diry[i]); % slope
endfor;
%p = z0 for i = 2 upto L: .. z[i]{(dirx,diry[i])} endfor; % build the path
p = z0 for i = 2 upto L: .. z[i]{(1,diry[i])} endfor; % build the path
draw p;
%penlabels(0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20);
%"Box dimensions"; % uncomment first comment for debugging
%show (w,h); % uncomment first comment for debugging
endchar;
enddef;
def genchars(expr height) =
for j = 0 upto 25: % corresponds to a varying width between 1mm and 6mm
width := 1+5*j/25; % width of sinwave cycle in mm
sinwave(97+j,width,height,40); % create character
endfor;
for j = 0 upto 25: % corresponds to a varying width between 6mm and 11mm
width := 1+5*(25+j)/25; % width of sinwave cycle in mm
sinwave(65+j,width,height,80); % create character
endfor;
enddef;
Next, create the file ~/texmf/fonts/source/wfsini.mf, and paste in the following
Metafont code:
% Sinewave font of amplitude 1 mm
input wfsin;
genchars(1); % call genchars subroutine specifying amplitude of 1
shipit;
end;
The Metafont source was conveniently split into two files in order to make it
easy to generate character sets of sinewaves of arbitrary amplitudes.
For example to generate a character set of sinewaves with amplitude 2 simply
modify "wfsini.mf" as such (and save under a different name, say wfsinii.mf):
% Sinewave font of amplitude 2 mm
input wfsin;
genchars(2);
shipit;
end;
To compile the font wfsini, invoke
$ mf '\mode=localfont; mag=magstep(0);' input wfsini.mf
The resulting files are wfsini.600gf, wfsini.tfm, wfsini.log.
The "600" refers to the bitmap resolution of the font.
To create a packed font, invoke
$ gftopk wfsini.600gf
The resulting file is wfsini.600pk.
Next, copy the packed font and the font metric file into the directories where
TeX expects to find them. For example
$ cp wfsini.600pk ~/texmf/fonts/pk/
$ cp wfsini.tfm ~/texmf/fonts/tfm/
To incorporate the font into a LaTeX document, create a file test.tex and
insert the following code.
\documentclass[english]{article}
\begin{document}
\newfont{\sinfonti}{wfsini}
\noindent
\sinfonti
mmmmmmmmmmmmmmmmmmmmmmmmmm \\
abcdefghijklmnopqrstuvwxyz \\
ABCDEFGHIJKLMNOPQRSTUVWXYZ \\
abCDefGHijKLmnOPqrSTuvWXyz
\end{document}
Notice, I had used the letters a-z and A-Z to refer to the sine wave glyphs in
the font. It is not coincidental that the lowercase and uppercase alphabet
map to the complete set of sine wave glyphs.
When creating the font, in the subroutine genchars, within the loop I set
the character glyph ordinals to be "65+j" and "97+j". The variable j is
incremented from 0 to 25. The ASCII code for "A" is 65, and for "a" is 97.
Thus, the sine wave glyphs within the font are assigned the same codes as the
characters in the uppercase and lowercase alphabet, whereby "a" maps to the
shortest wavelength sinusoid glyph, and "Z" to the longest.
It is obviously convenient to be able to typeset the sinusoidal waves offered by
the font using the Latin alphabet.
Compile the LaTeX document:
$ latex test
Display using xdvi
$ xdvi test
Error types:
* "strange path" whose "turning number" is zero.
Returns this error when trying to fill a loop that intersects itself.
The reason is that with such a loop Metafont cannot distinguish between the
inside and outside of the loop.
TeX Admin Utils
********************************************************************************
* - TeX Admin Utils -
********************************************************************************
Most TeX/LaTeX users are likely to install TeX through their OS distribution,
and will not need to tinker with the installation. Users whose TeX needs are
more sophisticated, however, will need to administer their TeX installation,
and may, thus, benefit from this section. TeX administration may be necessary
for many reasons, amongst them:
* Installing TeX directly (i.e. without a package manager).
* Installing packages through CTAN and from third parties.
* Installing and testing new fonts.
* Correcting problems caused by conflicting packages, or incorrectly
installed packages.
* Managing multiple TeX installations.
My main focus in this section is the TeX directory structure, and font
administration tasks.
Before proceeding I recommend taking a look at the TeX FAQ website.
It offers a wealth of background on TeX and its many resources, as well as
helpful advice on using LaTeX and related software.
----------
| TeX Live |
----------
TeX and related software are generally freely distributed software. CTAN is
the largest repository of TeX related software and packages. However, when it
comes to installing TeX, users will generally prefer a pre-packaged
distribution. The two major package distributions are TeX Live and MikTeX.
These come bundled with all the essential binaries, as well as a myriad of
packages, fonts, configuration files and documentation.
* MikTeX was designed specifically for Microsoft Windows systems.
* TeX Live is a cross platform distribution. There are three main packaging
schemes in this collection:
* TeX Live for GNU/Linux.
* MacTeX, a TeX Live distribution for the Mac OS X.
* proTeXt, an enhancement of MiKTeX for Microsoft Windows.
Most Linux users will use TeX Live.
For more about TeX Live follow the TeX Live link and its hyper links.
TeX Live comes with a powerful management tool called tlmgr.
See its man page for more.
$ man tlmgr
------------------------------------------
| The TeX directory hierarchy and mktexlsr |
------------------------------------------
The TeX directory structure (TDS) is rather complicated and TeX software relies
on a special software library called the Kpathsea library. Basically, its
purpose is to return a filename from a list of directories specified by the
calling program within the TDS.
Refer to this CTAN page and select the "Package documentation" link to download
the Kpathsea manual. This manual is the authoritative document describing
the particulars of the TeX directory hierarchy.
The installation root for the TDS is by default /usr/local. Different OS
distributions may choose a different root. For example, in Fedora it's /usr.
To avoid confusion I'll refer to the installation root as prefix.
All TeX binary executables (e.g. tex, latex, pdflatex, xetex) are stored in
prefix/bin.
Architecture-independent files are stored in prefix/share.
The TDS root is prefix/share/texmf. This is where the bulk of TeX related
files reside. It contains a number of important subdirectories. For example
* fonts - contains font related files (metafont sources, links to system fonts,
bitmaps, font metric files and more)
* tex - contains subdirectories with all TeX and LaTeX related package files.
(e.g. style files)
* metafont - metafont related files, but not actual fonts (those are in the
fonts directory).
* dvips - contains dvips ps trick files, configuration files, files that
instruct dvips to load specific font maps, and more.
* xdvi - contains xdvi related files, including the global configuration file
XDvi.
TeX keeps a database of all files in its directory tree in a file named "ls-R",
which resides at the top level of the texmf tree directory.
A local texmf tree will contain a local ls-R file (e.g. $HOME/texmf/ls-R).
This file includes the location of package directories, style (sty) files,
mf/pk font files, etc.
The mktexlsr command searches through the texmf tree and updates the ls-R
database.
Invoke this utility after having manually installed or removed a TeX/LaTeX
package or font package.
$ sudo mktexlsr
Normally, packages installed via a package manager will have mktexlsr invoked
as part of the installation procedure.
Invoke mktexlsr locally when changes have been made to the user's texmf tree.
$ mktexlsr
To query for executable locations, installed maps, and kpathsea variables
with a TeX Live distribution issue the command
$ tlmgr conf
The runtime path configuration file for kpathsea is texmf.cnf.
Only modify this file if you know what you are doing!
----------------------------------
| Searching the TDS with kpsewhich |
----------------------------------
To search for a file in the TeX directory structure use the command kpsewhich.
It is a standalone front-end of the kpathsea library that can be used
to examine variables and find files.
For example, to search for the style file article.sty, type
$ kpsewhich article.sty
This package is part of a basic TeX installation, so it is sure to be found.
The resultant location for my installation was:
/usr/share/texlive/texmf-dist/tex/latex/base/article.sty
To search for culmus.sty, type
$ kpsewhich culmus.sty
Since this style file is not bundled with TeX by default, it will only be found
if installed and subsequently mktexlsr is invoked.
The resultant location for me was:
/usr/share/texlive/texmf-local/texmf-compat/tex/latex/culmus/culmus.sty
Notice from the search result that article.sty is located in a subdirectory of
the texmf-dist top level directory, whereas culmus.sty is located in a
subdirectory of the texmf-local top level directory. This is because
article.sty is an integral part of the TeX Live distribution, whereas culmus.sty
is not, and therefore, the installer placed it separately.
The kpsewhich command can also be used to obtain the value of a TeX
configuration variable. For example
$ kpsewhich -var-value TEXSYSCONFIG
When files of the same name exist in different locations in the TeX directory
hierarchy, kpsewhich will return the location of only one of them. However,
if invoked with the option --all, all locations are returned. For example
$ kpsewhich --all updmap.cfg
----------------------------
| TeX and non-Metafont fonts |
----------------------------
TeX/LaTeX and related programs are capable of using fonts created with systems
other than Metafont, in particular, the Postscript font system and the True Type
font system. However, adding such a font (which is not part of your
distribution or part of a package) requires creating a compatibility layer that
makes the font understandable to TeX/LaTeX and related programs.
Most likely you will not encounter the need for this, as your Tex installation
will already include all the necessary files to handle the Postscript and
TrueType fonts available with your system. However, under certain
circumstances, for instance, where you created (or purchased) a font and wish
to incorporate it into your TeX/LaTeX documents you will need to know how to do
this.
* For Postscript fonts, the utility fontinst is available.
Refer to the fontinst manual and man page for more about it.
Also refer to this webpage.
* For TrueType fonts refer to this webpage.
---------------
| TeX font maps |
---------------
After installing a new font (i.e. one that doesn't come with the TeX
distribution) the last step is to add and enable its font map.
Tex font maps are files that contain information necessary for TeX to translate
between its font naming system and the naming schemes of other font systems
(e.g. Postscript, Truetype). Manually creating and managing these map files
is not a trivial task. Tex distributions, however, come with a utility that
vastly simplifies the management of map files.
The updmap (short for "update map") utility manages TeX font maps. It updates
the font map files used by pdftex, dvips and dvipdfm.
Note, before using updmap run "mktexlsr" as root to make sure updmap can
find the map files it needs.
$ sudo mktexlsr
The updmap utility can be used to update font maps on a system level using
updmap-sys (equivalently updmap -sys)
$ sudo updmap-sys
or on a user level using updmap-user (equivalently updmap -user).
Updmap uses configuration files. The main configuration file used by updmap is
by default $TEXMFROOT/texmf/web2c/updmap.cfg. To locate all configuration
files, type
$ kpsewhich --all updmap.cfg
If you installed a font map (and ran mktexlsr), to enable it, type
$ updmap-sys --enable Map=mapname.map
To disable a font map
$ updmap-sys --disable mapname.map
mapname should be in the ls-R database. Do not specify full pathname to
a map file.
For a transaction log of updmap see /var/lib/texmf/web2c/updmap.log.
It is better not to use updmap locally (without "-sys") as that creates
local map files. In such a case subsequent processing of updmap-sys
will have no effect for the user who had used updmap locally.
He will thus have to run updmap locally to add the new font.
Local map files for TeX Live are stored in the tree ~/.texliveYEAR/texmf-var.
Substitute for "YEAR", the year of your TeX Live distribution (e.g. 2020).
Specific directories where updmap places its map files are:
~/.texliveYEAR/texmf-var/fonts/map/dvips/updmap
~/.texliveYEAR/texmf-var/fonts/map/pdftex/updmap
~/.texliveYEAR/texmf-var/fonts/map/dvipdfmx/updmap
To undo a local updmap remove the font map files from these directories.
See man page for a more complete description of updmap.
$ man updmap
Some things to note:
* Normally, font packages installed via a package manager will have updmap
invoked as part of the installation.
* Postscript font maps can be found in /var/lib/texmf/fonts/map/dvips/updmap
* A common pitfall when reinstalling or upgrading TeX/LaTeX is to copy the
$HOME/.texlive cache directory to a new installation. This often causes
problems with finding and displaying fonts. For a new TeX/LaTeX installation
delete the $HOME/.texliveYEAR directory (if the new installation is a
distribution from a different year, this is unnecessary, as the directory will
have a different name). The .texliveYEAR directory will be subsequently
regenerated anew when TeX or LaTeX deem it necessary, in a manner that is
consistent with the new installation.
After installing a new font and using updmap-sys to enable the given font map
file (e.g. mapname.map), the psfonts.map and pdftex.map files should be updated
to include the new font map entries. If for some reason this is not happening
xdvi, dvips and pdflatex will not be able to find that font.
Here are some workarounds:
* For pdflatex:
Before \begin{document}
enter the Latex command
\pdfmapfile{=culmus.map}
Note, this will not help with xdvi and dvips.
* For xdvi, edit file
/usr/share/texlive/texmf-dist/dvips/xdvi/config.xdvi
Add the line
p+ mapname.map
* For dvips, use the -u option to add the map file manually
$ dvips -u +mapname.map file.dvi -o file.ps
----------------
| Testing a font |
----------------
To test a font do as follows:
$ tex
At the prompt enter "testfont" (the double asterisk is just the prompt)
**testfont
You will be asked to provide the name of the font you wish to test:
Name of the font to test =
Type the TFM name of font and press enter.
For example type cmb10 which is part of Knuth's Computer Modern font series
(/usr/share/texlive/texmf-dist/fonts/tfm/public/cm/cmb10.tfm).
A single asterisk prompt will appear.
Enter the following two commands (omit the asterisk; it's just the prompt)
*\table
*\bye
The file testfont.dvi will be generated containing a table with the font's
characters. You can view it with xdvi
$ xdvi testfont
Alternatively, you can invoke pdftex and follow the above procedure. The file
testfont.pdf will be created, and can be viewed with a PDF viewer.
Graphical Applications
********************************************************************************
* - Graphical Applications -
********************************************************************************
------
| Gimp |
------
Gimp is a highly sophisticated, feature rich, open source image editing
program. It is often cited as a free alternative to Adobe Photoshop,
although there are differences in what each offers, and Photoshop is
purportedly more powerful.
Amongst the things Gimp supports:
* A comprehensive toolbox for drawing and editing images
* Various region selection schemes (i.e. rectangular, elliptical, freehand)
* Many options for each tool
* Numerous image formats
* Colors, gradients, patterns
* Rendering text on a path
* Transparency
* Multiple layers.
Working with Gimp relies heavily on layers, so here is more about that.
* An image can be constructed from any number of layers.
* Each layer can be edited or manipulated independently of other layers.
That is, erasing or modifying a shape or text object or pixels in one layer
does not affect the content of other layers.
* Layers can be merged completely (usually when work on the image is
complete), or merged selectively (e.g. two adjoining layers) when one no
longer needs the contents of the layers to be separate.
* When a new canvas is opened a non-transparent layer titled "Background"
is automatically created.
* Additional layers can be added and/or manipulated with the Layer menu or
Layers toolbox.
* Each new layer added is by default opaque (non-transparent), however, its
opacity can be adjusted in the Layers toolbox.
100% opacity means the layer is not transparent.
0% opacity means the layer is fully transparent.
However, an opaque layer is only opaque where pixels have been drawn or
rendered.
* Layers are stacked in the order they are created. Reordering layers is
possible using a drag operation in the Layers toolbox.
* When merged, a higher layer's contents will "overwrite" that which is below
it. Since all but the first layer are initially opaque, higher layers on the
stack overwrite those beneath them.
* When a layer with a certain degree of transparency is merged with
another layer, the contents will merge together in a way commensurate with the
degree of transparency.
It should be noted that Gimp works on bitmap images rather than vector
graphics images. This means that once a shape or text object has been merged
into the base image layer, it can no longer be edited as an independent object.
Prior to merging, however, the shape or text can still be manipulated.
I am not an artist, nonetheless I find gimp very useful for a number of
things. Here are some of them:
* Print or export an image of any format to a pdf file
* Basic editing of images
* Thresholding.
One can use gimp to threshold a scanned image. I'll assume the scanned
image is in tiff format. Open the tiff image with gimp and apply a
compression (e.g. jpeg or similar compression).
Gimp will often assume the image is scaled to a resolution of 72dpi.
The resolution should be changed to whatever it was scanned at.
Note, when using SANE's "scanimage" command use the option --format=tiff
and scanimage will automatically embed the resolution information in the
tiff file.
With that done, bring up the threshold tool (Accessible through the
"Colors" menu, or "Tools:Colors Tools" menu) and apply thresholding.
The degree of thresholding can be adjusted until a satisfactory result
is obtained.
---------
| Xournal |
---------
Xournal is a notetaking and sketching graphics utility similar to "jarnal".
It's also great for annotating pdf documents, as well as filling out pdf
forms (by overlaying text and freehand scribbles and drawings).
Especially useful as a teaching aid.
(See website).
-----
| GRI |
-----
gri - A scientific plotting script based program.
GRI Help:
file:/usr/doc/gri-2.4.4/html/InvokingGri.html
---------
| Gnuplot |
---------
gnuplot is a scientific plotting program.
(See also section Mathematical Graphing Utils.)
PDF viewers
********************************************************************************
* - PDF viewers -
********************************************************************************
There are many PDF viewers out there. Here are a few that I've used.
* evince - A multiple document format reader. It can read PDF, Postscript,
tiff and more. It is GNOME's default document reader.
* xpdf - A XpdfWidget/Qt based pdf viewer.
Accompanying command line tools come with it.
For more about it see the xpdfreader homepage.
* acroread - A port of Adobe Reader for Linux. No longer maintained by Adobe.
Saw it on AUR (Arch Linux user repository) in 2020.
* llpp - A viewer with vim like commands (available for Arch Linux)
Some of the more common commands within llpp:
help/h - Get help
space - Go to next page
del - Go to previous page
m - Create a named bookmark :: ~ - create quick bookmark
' - Switch to bookmark mode (use this to jump to a bookmark)
> < - Rotate
F - Go to link
r - Reload document
= - Show current position (includes page number)
Ctrl-p - Launch a command with the document path as an argument
y - Select link and paste its description to clipboard
u - Dehighlight
, - Decrease/increase page brightness
# zoom
Ctrl-0 - To unzoom
Alt-c - Center view
To open a password-protected pdf document
$ llpp -p docname.pdf
* epdfview
Text Document Conversion Utilities
********************************************************************************
* - Text Document Conversion Utilities -
********************************************************************************
To convert from various document formats to others a variety of tools are
available.
* pandoc - converts to and from many formats including text, latex, odt,
docx and many more.
The philosophy behind the program is to parse the source document into an
intermediate representation, and reconstruct it with the appropriate formatting
in the target document.
For example to convert a latex file to odt format (used by Libre/Apache Office
and many other applications)
$ pandoc infile.tex -o outfile.odt
(For more see Pandoc documentation)
* tex4ht - converts latex documents to HTML.
From there, other tools can be used to convert to other formats such
as odt.
* Online conversion from pdf to doc:
www.pdftoword.com
* To convert from text to pdf
$ soffice --convert-to pdf filename.txt
See here for additional possibilities.
VIM Essentials
********************************************************************************
* - VIM Essentials -
********************************************************************************
VIM is an extremely powerful text editor. I list here only a very small
fraction of its features. Its on-line help feature covers the full capability
of vim.
Just to give you an idea of how powerful vim is, version 8.1 of vim has about
177 thousand lines in its help documentation. It would take about a 4000 page
book to contain it.
You can launch vim by specifying a file name or not:
$ vim
$ vim file.txt
You can launch vim in read-only mode:
$ vim -R file.txt
$ view file.txt
Other invocations of vim are possible. See man page for more about that.
$ man vim
Two additional utilities that come with vim are:
* vimtutor
If you are new to vim, it is recommended to start with vimtutor
$ vimtutor
* vimdiff
This utility allows for easy comparison and editing of multiple versions of
a file:
$ vimdiff file1 file2 ...
Much of vim's behavior can be configured by editing the file .vimrc.
Once you have a good notion about how to use vim, you will inevitably be
modifying .vimrc to meet your needs.
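For instance, a few common settings one might place in .vimrc (these
particular choices are just illustrative):
" enable syntax highlighting
syntax on
" show line numbers
set number
" insert spaces instead of tab characters
set expandtab
" indentation width used by autoindent and >>
set shiftwidth=4
" show matches while typing a search pattern
set incsearch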
In VIM there are four operating modes:
Normal, Visual, Operator-pending mode and Insert/Command-line mode
Vim was originally designed to operate in a terminal such as xterm.
A GUI version of vim is available: gvim.
Its main advantage is the ability to use the mouse to position the cursor
(although VIM supports mouse functionality when using an xterm and some other
terminals. For more about it, within VIM type "help mouse".)
The Gui version also provides menus for invoking many commands and options.
--------------
| On-line help |
--------------
Vim has a very powerful and comprehensive help feature built in.
You can request help whilst in your document. Vim will add an additional
pane to display the help. You can navigate within the help window as you do
in your document.
Some common invocations:
:help help .................... How to use help
:help x ....................... Help on 'x'
:help x^D ..................... List all help subjects matching 'x'
:help user-manual ............. Comprehensive user manual
^] jumps to a subtopic, and ^T or ^O jumps back
vimtutor ..................... run a hands on tutorial program (highly
recommended)
----------
| Encoding |
----------
Accepts ascii, iso8859-X, utf, and more
options: encoding, fileencoding
environment variables: $LANG
* UTF-8
If a terminal (e.g. xterm) was started with a Unicode font containing glyphs
for the language you wish to edit in, then you can set VIM to edit in utf-8.
Simply type in vim
:set encoding=utf-8
To switch from English to Hebrew use the standard toggle you use in X-windows
(e.g. Alt-Shift). Note if no Hebrew keymap has been configured in X-windows
then one can be configured for VIM (:set keymap=hebrew_utf-8).
See /usr/share/vim/vim73/keymaps for all available keymaps (substitute your
version of vim for 73.)
To input unicode characters by their hex number representation use:
CTRL-V u 1234
An insert-mode mapping can also be used (e.g. :imap abc followed by the
character to insert)
When using unicode BIDI, sometimes the text direction is not rendered
as you intended (the unicode BIDI algorithm uses context to determine
how to render bidirectional text). To override the behavior of the algorithm,
a few non-printable characters are available (for more follow this link).
PDF U+202C POP DIRECTIONAL FORMATTING End the scope of the last LRE, RLE, RLO, or LRO.
LRE U+202A LEFT-TO-RIGHT EMBEDDING Treat the following text as embedded left-to-right.
RLE U+202B RIGHT-TO-LEFT EMBEDDING Treat the following text as embedded right-to-left.
LRO U+202D LEFT-TO-RIGHT OVERRIDE Force following characters to be treated as strong left-to-right characters.
RLO U+202E RIGHT-TO-LEFT OVERRIDE Force following characters to be treated as strong right-to-left characters.
(treat as isolated text)
LRI U+2066 LEFT-TO-RIGHT ISOLATE Treat the following text as isolated and left-to-right.
RLI U+2067 RIGHT-TO-LEFT ISOLATE Treat the following text as isolated and right-to-left.
FSI U+2068 FIRST STRONG ISOLATE Treat the following text as isolated and in the direction of its first strong directional character that is not inside a nested isolate.
Some useful commands in regards to employing alternative encodings:
* ga - shows the decimal, hexadecimal and octal value of the character under
the cursor.
* g8 - shows the bytes used in a UTF-8 character, also the composing
characters, as hex numbers.
* To enter a unicode character via its code type in insert mode:
CTRL-V u XXXX
where XXXX is the hex value of the 16 bit unicode character (e.g. 05D0)
* To list all available keymaps:
:echo globpath(&rtp, "keymap/*.vim")
--------
| Hebrew |
--------
Setting vim to work with Hebrew
For example, to toggle Hebrew editing with the F9 and F10 keys:
:map <F9> :set norl nohkmap<CR>
:map <F10> :set rightleft hkmap<CR>
:imap <F9> <Esc>:set norl nohkmap<CR>a
:imap <F10> <Esc>:set rl hkmap<CR>a
:set allowrevins
(This can be inserted into the .vimrc file. Make sure to disable F9 and F10
in your window manager program if using those function keys)
You can open vim in Hebrew mode using the command line option -H
$ vim -H file
To use reverse insert mode, including while editing inside the command line
(given you set allowrevins as above), press Ctrl-_
--------------------
| Working with files |
--------------------
Some commonly used commands relating to files:
:e {file} ....................... Edit file.
:gf ............................. Edit file whose name is under cursor
:CTRL-^ ......................... Edit previous file in buffer (can be used
for toggling between editing two files)
:r file ......................... Inserts file from point of insertion
:f[ile] or CTRL-G ............... Displays the name of the current file
---------------
| Cursor Motion |
---------------
For a complete reference on available navigation and motion related commands,
issue
:help motion
Some common navigation/motion related commands:
G ............................... move to end of document
gg .............................. goto first line in document
gu .............................. make selection lowercase
gU .............................. make selection uppercase
CTRL-O .......................... jumps to previous place in jump list
CTRL-I .......................... jumps to next place in jump list
Scrolling (type :help scrolling for full documentation)
z<CR> ........................... positions page so line with cursor is at top
z. .............................. " center
z- .............................. " bottom
zt .............................. Like z<CR>, but leaves cursor in same column
z+ .............................. redraw with line just below the window at top
z^ .............................. redraw with line just above the window at bottom
CTRL-E .......................... scroll down (1 line without arg)
CTRL-F .......................... scroll down by 1 page
CTRL-D .......................... scroll down by 'scroll' option (default: half screen)
CTRL-Y .......................... scroll up (1 line without arg)
CTRL-B .......................... scroll up by 1 page
CTRL-U .......................... scroll up by 'scroll' option (default: half screen)
---------
| Windows |
---------
In Vim it's possible to split the terminal window into multiple panes,
arranged vertically and/or horizontally.
Some commonly used commands for manipulating windows:
CTRL-W s .......... (sp[lit]) ... split current window into two
CTRL-W CTRL-V ..... (vs[plit]) .. split current window vertically
CTRL-W q .......... (q) ......... quit current window
CTRL-W j ........................ move cursor to the next window down
CTRL-W k ........................ move cursor to the next window up
CTRL-W r ........................ rotate windows downwards/rightwards
CTRL-W R ........................ rotate windows upwards/leftwards
CTRL-W x ........................ exchange current window with next one
CTRL-W - ........................ decrease size of window by 1
CTRL-W + ........................ increase size of window by 1
CTRL-W = ........................ sets all window sizes equal
z{nr}<CR> ....................... set current window height to {nr}
:set winheight .................. this option is minimum window height
:sp filename .................... open filename in split horizontal window
:vs filename .................... open filename in split vertical window
vim -o file1 file2 .............. opens file1 and file2 horizontally split
vim -O file1 file2 .............. opens file1 and file2 vertically split
some options determining behavior of windows:
:set winfixheight/wfh (boolean) . sets active window to be of fixed height
:set winfixwidth/wfw (boolean) .. sets active window to be of fixed width
Can also use :wincmd instead of CTRL-W.
Follow wincmd with whatever you would have put after CTRL-W
(e.g. :wincmd l)
This is useful for placing window commands in functions. For example
fun WinAdj()
" Resize window to have width of 40 columns
:40 wincmd |
" Fix width of window to whatever it is now (i.e. 40 columns)
:set wfw
" Move over to window on left
wincmd l
" Make other windows equal in size (except one that is of fixed size)
wincmd =
endf
---------
| Options |
---------
Vim has a myriad of options. Accessing and manipulating options is easy.
Here are some guidelines:
* For boolean options
:set option1 option2 ...
Sets options 1 and 2.
For a list of options get help on "set" and "options"
:set nooption1 nooption2 ...
Unsets options 1 and 2
* For numerical or string options
:set option=value
:set option:value
Set option to value (a number or string; spaces in a string value must be escaped with \)
:set option+=value
Add the value to a number option, or append to a string option
* In general
:set option?
Shows value of option
:echo &option
Prints to the screen value of option
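For example (the particular options and values below are only illustrations):
:set number ignorecase .......... turn on line numbers and case-insensitive search
:set textwidth=78 ............... break lines at 78 columns
:set textwidth? ................. reports "textwidth=78"
:echo &textwidth ................ prints 78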
---------------------------
| Mapping and Abbreviations |
---------------------------
Vim allows you to define key sequences that when entered will cause a different
text to be entered.
For instance, if you wish to effect the behavior whereby when "abc" is typed,
the word "alphabet" will appear instead, you can do this using the "map"
command. This is very useful for defining shortcuts to commonly typed words.
:map keysequenceA keysequenceB
maps keysequenceA to keysequenceB
'imap','cmap','lmap','nmap' map the keysequence for only a specific
operating mode.
:ab lhs rhs
:iab lhs rhs
when lhs is typed followed by a space rhs is inserted
:una lhs ........................ remove abbreviation lhs from list
For specifying special characters
:help intro
Then search for Nul to place you in the right place in the help file
/Nul
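For example (these mappings are only illustrative), one could place the
following in .vimrc. Note that trailing comments are not allowed on map and
abbreviation lines, since the " would become part of the mapping:
" fix a common typo while typing
:iab teh the
" save the file with F5, from normal and insert mode
:map <F5> :w<CR>
:imap <F5> <Esc>:w<CR>a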
-------------
| Visual Mode |
-------------
Visual mode is similar to how you use the mouse to select text, except you use
the cursor keys to move about the area you wish to select. To do this:
1. Mark the start of the text with v, V or ^V
note: v is for standard visual, V for line-visual, ^V for block visual
2. Move the cursor to highlight text
3. Apply an operator command
You can change the starting point with o, O
options are: highlight, virtualedit,
For more info see visual.txt in vim help.
---------------------------
| Standard editing commands |
---------------------------
. ............................... Repeat last command
R ............................... Overwrite characters
gR .............................. Overwrite (tab also overwrites)
>{motion}, <{motion}, >>, << .... Shift line (def: shiftwidth=8)
:[range]<, etc, ................. Shift multiple lines
:reg a .......................... Display contents of register a
"a .............................. Use register a for next delete or yank
"ay{motion} ..................... Yank (copy) motion text into register a
"ayy ............................ Yank count lines into register a
"ap ............................. Paste text from register a but leave cursor
"gp ............................. Paste text and place cursor after text
:50put a ........................ Put text from a after line 50
:ce 80 .......................... Center line(s) between 80 columns
:ri 80 .......................... Right-align
:le 80 .......................... Left-align
:undo (u)
:redo (^R)
--------------------------------------------------------
| Switching case of words (capitalizing, uncapitalizing) |
--------------------------------------------------------
To uppercase:
gU followed by movement command (w and b are examples of movement commands)
To lowercase:
gu followed by movement command
Toggle case:
g~ followed by movement command
Examples:
gUiw ............................ Make current word uppercase
guw ............................. Lowercase from cursor to end of word
Can also use visual mode in place of a movement command.
-----------------------
| Search and Substitute |
-----------------------
The general form of the substitute command is
:[range]s[ubstitute]/{pattern}/{string}/[&][c][e][g][p][r][i][I] [count]
The range field indicates the range of lines to operate on.
For example, to operate on lines 1 to 20:
:1,20 ......
To operate on lines 1 till the end of the file:
:1,$ .......
The count field indicates the number of lines to operate on from here.
Some other useful options are:
g = replace all occurrences on the line
c = confirm (interactive replace)
For example to replace all occurrences of xxx with yyy in file:
:1,$s/xxx/yyy/g
To substitute part of the search argument, use \1 \2 etc... in the
substitution argument.
For example
:s/\([Hh]ello\) there/\1 over there/
will replace "Hello there" with "Hello over there",
and "hello there" with "hello over there".
If you wish to perform more intricate substitutions then use
s/{pattern}/\={expression}/
The text following \= will be evaluated as an expression
(See on-line help on "sub-replace-special" and "sub-replace-expression"
for more details.)
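For instance, the following (purely illustrative) command increments every
number on the current line by one, by evaluating an expression for each match:
:s/\d\+/\=submatch(0)+1/g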
If a special character needs to be included in either the search or replace
field then precede it with the escape character \
For example to search for the occurrence of "\item":
/\\item
Some search related commands:
/pattern ........................ Search for "pattern"
n ............................... Repeat latest search
N ............................... Repeat latest search in opposite direction
* ............................... Search for pattern under cursor (:help *
for more)
# ............................... Same as * but backwards
g* .............................. Same as * but will search as part of word
g# .............................. Same as g* but backwards
/\<pattern\> .................... Search for pattern as a separate (whole) word
/\cpattern ...................... Match to pattern without regard to case
:noh ............................ Unhighlight search words
/e$ ............................. Search for an "e" at the end of a line
For more info on searching, invoke:
:help /
Use ":help subject" for info on constructing search patterns.
Use ":help subject^D" to list all items found
option: ignorecase .............. Can set or unset for ignoring case in searches
Examples:
* looking for all 9 or more digit numbers
/[0-9]\{9}
To cause all search matches to be highlighted set option:
:set hlsearch
To reverse unset option:
:set nohlsearch
To substitute a newline use \r (not \n)
-------------------------------------
| Matching parentheses, brackets etc. |
-------------------------------------
Use % to jump to a matching parenthesis or bracket or curly braces.
Use matchpairs option to change what is considered matching pairs.
Default is matchpairs=(:),{:},[:]
Note, you can't do something like matchpairs=":", since the two characters to
be matched are the same.
The matchit package offers a more sophisticated matching apparatus.
To use it, add to .vimrc (or invoke manually)
:packadd! matchit
Highlighting matching parenthesis, brackets etc.
Sometimes vim is configured to highlight matching parenthesis as you edit
or browse a document. To disable this feature during runtime, invoke
:NoMatchParen
To enable
:DoMatchParen
To avoid loading this plugin, place in .vimrc the line
:let loaded_matchparen = 1
For more about parenthesis
:help paren
--------
| Syntax |
--------
This feature of Vim is used in highlighting text in computer languages,
and more.
Some commonly used commands:
:syntax on, :syntax off ......... Turn on or off syntax highlighting
:hi[ghlight] {group-name} ....... List highlight groups or just group-name
For help type:
:help syntax
Syntax highlighting is available for many file types (e.g. c, cpp, html,
matlab, and many more). See the /usr/share/vim/vim*/syntax directory.
The syntax highlighting is automatically chosen according to the file name
extension.
Custom syntax files can be made - see help on syntax
To list all the current highlight groups, issue
:hi
Some options pertaining to syntax:
* synmaxcol (smc) - an option to specify maximum column in which to search
for syntax items. Set to zero to remove the limit.
Examples:
* Highlighting the word "Donation"
:syntax match donationMatch "Donation"
:hi def donationMatch term=reverse ctermfg=0 ctermbg=3 guibg=Yellow
- The first line defines the group "donationMatch" to be all matches of
the word Donation
- The second line causes all match instances indicated in donationMatch to
be highlighted according to the scheme specified in the remainder of the line.
The rules specified in the example above, cause the following behavior:
- If a standard terminal is used then reverse the text.
- If a color terminal is used then highlight the word with a background of
color 3 (yellowish-green), and foreground of color 0 (black).
For more info on colors
:help color-xterm
The following example illustrates defining a syntax region that will
be highlighted with a specified color (the start and end patterns below are
placeholders; put your own delimiters between the quotes).
:syntax region Mycomment matchgroup=Cbrackets start="..." end="..." concealends
:hi def Mycomment term=reverse ctermfg=6* guibg=Brown
The following behavior is expected with this syntax specification:
- On a black and white terminal the region between the start and end patterns
will be highlighted by reverse highlighting.
- On a color terminal it will be highlighted by displaying the text in the
color referred to by 6* (this is xterm specific).
- On a gui terminal it will highlight the text in brown.
The following is a list of colors and their codes:
0=black
1=darkred 9=red
2=darkgreen 10=lightgreen 22=darkgreen
3=gold 11=yellow
4=blue 12=lightblue 18=darkblue
5=magenta 13=lightmagenta
6=cyan 14=lightcyan
7=lightgray 15=white
8=gray 16=black
------
| Tags |
------
Tags are a powerful indexing feature that allows you to jump to a subtopic or
reference quickly and easily.
For example, you are editing a c-program, and you wish to jump to the
definition of the function under the cursor. If a tags file was
generated for the C-program you can simply press ctrl-] while the
cursor is on the function name and the cursor will appear at the location
where the function is declared. This may even be a different file.
To go back simply type ctrl-T.
Note that ctrl-] doesn't work if there is whitespace or special characters
in the tagname. You may select the text using ctrl-v and then type ctrl-]
or simply jump to a tag by invoking
:tag tagname
where tagname is the tag you wish to jump to.
To generate a tags file for C-programs as well as for a host of other
programming languages invoke (in a terminal, not vim)
$ ctags file1 file2 ...
To tell vim where one or more tags files you wish to use are located:
:set tags=tags_file_name
(e.g. ":set tags=./tags" will always look for a file name tags in the local
directory from where you launched vim).
* Custom tags
You may write your own tags file.
In a tags file each tag entry is on a separate line.
The most basic format is
tagname<TAB>filename<TAB>address
where the fields are separated by tab characters. For example:
mytag<TAB>/home/jdoe/myfile<TAB>10;"
Note: Normally, the tags must be sorted in lexical order, since Vim uses a
binary search (for performance reasons) to locate tags. If your tags file
was not sorted lexically then disable binary searching
:set notagbsearch
To turn on
:set tagbsearch
If you see the word "mytag" in your document and press Ctrl-], then
vim will open /home/jdoe/myfile (if not already opened) and jump to
the beginning of line 10.
You can also specify a tag in a tags file that jumps to a location via a search
by inserting something like this line in your tags file:
mytag<TAB>myfile<TAB>/mysearchexpression
Note that most special characters are interpreted literally in
mysearchexpression.
Note, the tagname can contain any character except tab and return. For example:
my tag.<TAB>myfile<TAB>1;"
The name of the tag is "my tag." (with a space between "my" and "tag.").
-------
| Folds |
-------
Vim provides a feature whereby you can collapse paragraphs, sections or
other structured units into a single line. This facilitates perusing a document,
script or program. The collapsed line is called a "fold", because it's as
though you are folding a piece of paper to cover up a piece of the text,
only to unfold it to reveal it.
To create a fold, select a few lines in visual mode and type zf.
The lines you have selected will collapse into one line shown on a gray
background. On that line will be shown how many lines are in the fold,
and the content of the first line.
Once the fold is created, you can open it with zo, and close it with zc.
To close all folds in the buffer, type zM.
To disable folding (show no folds), type zn.
To set back to normal fold view, type zN.
To toggle between the two, type zi.
To maneuver through folds
* [z - move to start of current open fold
* ]z - move to end of current open fold
* zj - move downwards to start of next fold
* zk - move upwards to end of previous fold
To create a four column sidebar indicating where folds are located
:set foldcolumn=4
There are a few methods by which VIM handles folds.
Using the manual method, no assumptions are made regarding where to fold.
To tell VIM to use this method type
:set foldmethod=manual
For other methods type
:help foldmethod
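As one illustration, the marker method creates folds between marker strings
placed in the text ({{{ and }}} by default, often inside comments):
:set foldmethod=marker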
VIM provides extensive documentation on folds.
Start with :help folds.
You can find an introduction in the user manual (a hyperlink to the relevant
chapter will be shown in the help screen).
---------
| Plugins |
---------
Plugins add additional functionality to vim.
Examples of plugins:
* pi_gzip - allows reading and writing of compressed files.
* pi_paren - Highlight matching parentheses.
For a list of global plugins (ones automatically loaded) issue
:help standard-plugin-list
Matching Parentheses plugin (see subsection on matching parentheses)
This plugin is loaded automatically.
To prevent it from loading, set the "loaded_matchparen" variable:
:let loaded_matchparen = 1
When the plugin is loaded, disabling and enabling is done as follows:
:NoMatchParen ... disables matching parenthesis highlighting
:DoMatchParen ... enables matching parenthesis highlighting
For more info consult help file pi_paren.txt
-------------------------------
| Formatting and Autoformatting |
-------------------------------
Use gq to format a paragraph.
The option used to set an external formatting program is
:set formatprg=program_name
"formatoptions" is an option containing formatting flags
t = Auto-wrap text using textwidth
c = Auto-wrap comments using textwidth, etc...
e.g.
:set formatoptions=t
See fo-table for a complete set of formatting options.
Some other commonly used formatting options:
textwidth ....................... set to maximum width of line before
VIM breaks the line. set to 0 to disable.
tabstop ......................... number of characters from one tab to next
shiftwidth ...................... number of characters for each step of indent
:retab .......................... command to replace tab characters with white
space (uses 'tabstop')
Instead of :retab you can also accomplish the same thing with a substitute
command such as
:s/\t/ /g
You can also map it to a key press
:map x :s/\t/ /g<CR>
Remember to unmap it after completing the task
:unmap x
Display options
wrap is a boolean option that specifies if text is displayed wrapped
around the screen or not.
Vim considers a line to be one that ends with a newline character (for Unix
files); whether a long line is wrapped on the display is controlled by the
'wrap' option.
If you wish to enable this behavior, issue:
:set wrap
To disable line wrapping
:set nowrap
-------------
| Programming |
-------------
Vim has its own programming language, by which you can write complex functions.
For example to act upon a condition, use the if/else construct:
if condition
  ...
else
  ...
endif
Examples of conditions:
&encoding == "latin1" (if encoding is latin1 return true otherwise false)
somevar > 3 (checks if somevar is greater than 3)
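As a minimal sketch (the function name and values are made up), a small
function using this construct could look like:
fun! SetWidth()
  " use a narrower text width for latin1 encoded files
  if &encoding == "latin1"
    set textwidth=72
  else
    set textwidth=78
  endif
endf
Call it with :call SetWidth().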
----------
| Unsorted |
----------
Shift-K formats a man page for the word underneath the cursor.
CTRL-C cancels the command presently being typed
q{reg} ...... starts recording typed characters into register {reg}
q ........... stops recording (see the example after this list)
termcap is an option that contains the present settings of the terminal
sleep n .... waits a specified number of seconds (n is a number)
filetype on/off ... turns on (off) file detection - useful for automatic
indenting, syntax hightlighting
au ... automatic commands to perform when loading a buffer
{visual}U, gU ... make text uppercase
{visual}~, g~ ... switch case of text
{visual}g CTRL-A ... add [count] to the number or alphabetic character
(for several lines, each line will be incremented by an additional [count])
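For instance (an illustrative recording), to record a macro into register a
that deletes the first word of a line and moves down, then replay it on the
next five lines:
qa0dwjq
5@a
Here qa starts recording into register a, 0dw deletes the first word, j moves
down one line, q ends the recording, and 5@a replays it five times.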
Emacs
********************************************************************************
* - Emacs -
********************************************************************************
Emacs is an extremely powerful text editor (and much more), that can be
launched and used from within a terminal or within its own GTK+ style window.
It supports numerous features such as syntax coloring, built in documentation,
Unicode support, and extensions. It is highly customizable with Emacs Lisp.
Emacs also comes with mail and news utilities, a calendar and more. There are
even games written for Emacs using text animation.
For more about Emacs refer to the Emacs Website.
Also refer to this Emacs Wiki site for various tutorials on Emacs.
-----------
| GNU Emacs |
-----------
To launch emacs
$ emacs [files]
To launch emacs in the terminal (rather than open a separate window)
$ emacs -nw
Emacs works with key bindings. The key bindings are usually a combination
of keystrokes involving the two modifier keys:
C=Ctrl
M=Meta (Alt)
The two modifier keys are used extensively in operating emacs and are
essentially part of every command.
Some essential commands to get you started
* To Get help
C-h
C-h r (manual)
C-h t (tutorial)
* Undo
C-x u
* Exit
C-x C-c
* Movement
M-v (navigate backward by a screen's worth)
C-v (navigate forward by a screen's worth)
If you're just starting it is highly recommended to go through the Emacs
tutorial (C-h t).
--------
| XEmacs |
--------
XEmacs was once used to provide terminal based Emacs with a graphical interface
on X based systems. Today standard GNU Emacs is GTK+ based and provides a more
modern looking graphical interface than the classical X look provided by
XEmacs. If you wish to explore the differences install both and see which you
like better. See here for more about Xemacs.
To customize such face properties as "cursor textcolor".
Go to "options : customize : face" menu.
Type "text-cursor" (in minibuffer)
A customization buffer will show up in which you can change attributes.
Press the "set" button to set, and "save" button to save change in .emacs file
Note that if you don't know or remember the exact face property name just
press the return key when in the minibuffer and that will bring up all the
face properties. from there you can scan through and identify the one you need.
Of course other customizations are available, such as for variables, etc.
Alpine
********************************************************************************
* - Alpine -
********************************************************************************
Alpine is a feature rich terminal based email client, supporting Unicode,
and many other features necessary to handle modern day emails. It was developed
at the University of Washington, and is a successor to the Pine email
client.
Additional background on alpine can be found in this Wikipedia article.
It is available for both Unix and MS Windows.
----------------
| User Interface |
----------------
Alpine's user interface is text based, but very easy to use. The main screen
looks like this:
? HELP - Get help using Alpine
C COMPOSE MESSAGE - Compose and send a message
I MESSAGE INDEX - View messages in current folder
L FOLDER LIST - Select a folder to view
A ADDRESS BOOK - Update address book
S SETUP - Configure Alpine Options
Q QUIT - Leave the Alpine program
At the bottom of the screen (not shown) are textual menu entries.
You navigate through the different levels of alpine using single key strokes
or key stroke combinations. All the key strokes for a given context are shown
somewhere on the screen (usually at the bottom).
Being text based, alpine works fast even on low end equipment, and is easily
accessed on a remote computer using a terminal and remote shell program like
ssh.
The most important menu entry above is the "Folder List" (accessed by pressing
"L"). This is how you access your various mail accounts and folders.
If you navigate to your default mail folder
Folder List : Mail
alpine will bring up a list of folders.
INBOX is where all your incoming emails sit.
You can save individual emails into existing folders or a new folder.
I personally have nearly a hundred folders and thousands of emails saved in them.
I can easily access a saved email by knowing its context and accessing the
corresponding folder.
-------------
| Unix Alpine |
-------------
The first time you install alpine, you will likely need to configure a few
important settings, for which you will need to access the configuration menu.
To access the configuration menu, from the main screen navigate to:
SETUP : Config
Within Config you will see numerous options. Some of the important ones to get
you started are on top.
* Personal Name
Just enter your name the way you wish yourself to be identified in your
emails (e.g. Jack Spratt)
* User Domain
I never figured out what's best to set it to. Is it your own domain name,
or the domain name of your email account provider (e.g. gmail.com)?
* SMTP server (for sending)
Unless you have your own in-house email server (e.g. sendmail), your computer
will most likely be using an SMTP server through which to send your emails.
Suppose Jack Spratt has an email account jspratt73282@gmail.com.
Then place in the SMTP server field
smtp.gmail.com:465/ssl/user=jspratt73282@gmail.com
* smtp.gmail.com is the DNS name of gmail's SMTP server. The :465 tells
alpine to use port 465 (the standard port for SMTP over SSL) to communicate
with the server.
* ssl is an option which tells alpine to use secure communication with
the SMTP server. This is pretty much standard today.
* user=... is an option telling alpine whom you would like to authenticate as.
If you leave it blank, you will be asked for your username the first time
you attempt to send an email in a given alpine session.
Note, there is no way to configure your email account's password in
Unix alpine, unlike PC alpine which will save your password. The good news is
you only have to authenticate once per session.
Note, just in case there are still non-secure SMTP servers out there,
and you are using one (not recommended), configure the SMTP server field as
such:
your_smtpserver:25/user=jspratt
Non-secure SMTP uses port 25.
* Inbox path
The inbox path can take one of two forms:
* A local path to some directory on your computer such as /var/mail/jspratt.
If your inbox is in the standard /var/mail path, then you do not
have to touch the Inbox path setting.
This can be the case if your computer is running its own email server, or
you use a program like fetchmail to download your emails from your email
provider account to /var/mail/username, using a protocol such as POP3
(a minimal fetchmail sketch is shown after this list).
Note, if your Inbox is somewhere on your home network, then configure the
Inbox path to point to it via an IP address or name.
* The Inbox sits remotely on your email provider's servers, and is accessed
by you through IMAP.
Recall Jack Spratt and his gmail account.
You should configure your Inbox path as such
{imap.gmail.com:993/ssl/user=jspratt73282@gmail.com}inbox
Note, just in case there are still non-secure IMAP servers out there,
and you are using one (not recommended), configure the inbox field:
inbox-path={your_imap_server:143/user=jspratt}INBOX
Note, the default behavior of Gmail (and possibly other email account
providers) is to refuse IMAP connections. In order to use alpine with Gmail
you will need to enable IMAP through your Gmail web interface.
See below on how to do this.
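As mentioned above, fetchmail can be used to pull mail into a local inbox.
The following ~/.fetchmailrc is only a rough sketch (the server, user name and
use of an app password are assumptions; adapt them to your provider):
poll pop.gmail.com proto pop3
  user "jspratt73282@gmail.com" password "my-app-password" is jspratt here
  options ssl
Run fetchmail once with
$ fetchmail -v
to verify the connection before relying on it.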
------------------
| Collection Lists |
------------------
Alpine provides you with a way to better organize your email folders, by
setting up a Collections Lists. To set one up, navigate to
Setup : collectionsLists
This can also be useful for defining alternative email accounts which you wish
to access from alpine. For instance if you have two Gmail accounts, you can
define a collection list to reference your second Gmail account.
The other use for the collection list is to access folders other than your
Inbox. For instance, Gmail has other email folders such as SPAM, Trash and
more. If you setup a collection list for that gmail account it will let you
access all the different folders.
-----------------
| Mail Collection |
-----------------
The Mail Collection is a directory (either local or remote) that contains mail
files used to organize and store emails.
The default location for this directory is in the home directory (~/mail).
To save an email in a particular mail file, invoke "Save" (S) and type the
name of the mail file you wish to save to. If the file does not exist, alpine
will ask you if you would like the file to be created. To browse all existing
mail files in the mail folder invoke "To Folders" (^T). Maneuver through the
list and select the desired file.
To create a new file invoke "AddNew" (A), and type the name of the file to be
created.
If you wish to create sub-folders within alpine, do so outside of alpine.
These will then show up in the alpine browser, and you can create and save
emails to files within these sub-folders.
For example, suppose you wish to have a sub-folder for saving emails from friends
$ mkdir ~/mail/friends
Then, within alpine, you can create separate mail archives for each friend:
* friends/laura
* friends/mike
* friends/stacey
See here for more about mail collections.
----------
| Password |
----------
If you access an IMAP mail server using alpine, alpine will prompt you for a
password each time you open alpine afresh. Similarly, it will prompt you for a
password the first time you send mail via an SMTP server for that session.
Although, by default, passwords are cached for the remainder of the session,
they are forgotten once alpine is terminated.
To store IMAP and SMTP passwords in your home directory in encrypted form,
follow the following procedure.
First create the file .pine-passfile in your home directory.
$ echo > ~/.pine-passfile
This places a single empty line in the file, and will be your password file.
To use this password file with alpine, invoke alpine as such
$ alpine -passfile ~/.pine-passfile
Alpine will see that the file contains no passwords, so it will prompt you to
enter a master password. Choose a master password, and confirm.
It will then prompt you for the password you use to get into your email account
on the IMAP server on which your account resides. It will then ask if you want
to save this password into your passfile. Answer yes. Your password will be
saved in encrypted form. When you send mail using SMTP, you will be prompted
for the password, and subsequently asked if you want to save it. Again answer
yes.
Your master password will be the key to decrypt any passwords you subsequently
save into this file. The master password is the one password you will need to
enter afresh each time you launch alpine. The advantage is you only have to
have one master password to enter when launching alpine, instead of multiple
passwords you may need for accessing multiple IMAP accounts and SMTP.
Your master password is stored encrypted in the directory ~/.alpine-smime/.pwd
If you ever need to reset your master password erase or rename that directory.
See here for how to remove the master password requirement for alpine.
-----------
| PC Alpine |
-----------
To configure alpine launch cmd.exe.
Change to directory containing alpine.
If there is already a .pinerc file, then run
$ alpine -install
and it will allow you to specify its location.
Note, you'll need to place ldap32.dll in C:/WINDOWS/system for alpine to run.
Additional configuration tips:
* To specify a different pinerc file
$ alpine -p Y:\account\jdoe\.pinerc
Replace jdoe with your user name.
* To display alpine related registry info:
$ alpine -registry Dump
* To clear alpine related registry entries:
$ alpine -registry Clear
* To set local mail folder to point somewhere other than default:
Navigate to
SETUP : Collection Lists : Mail
Enter path: (e.g. Y:\account\jdoe\mail)
* To modify .pinerc location or other parameters in REGISTRY open up a cmd
terminal (Start : Run : cmd)
In cmd terminal issue regedit command and locate appropriate
register entry.
------------------
| Alpine and Gmail |
------------------
It used to be that Gmail allowed standard IMAP access.
Nowadays, Gmail requires two factor authentication for accessing email
services, so although you can still access Gmail's IMAP server, the
authentication stage cannot be handled by IMAP alone.
There is a work around whereby you can obtain an App password from Google,
and use that to connect to Gmail's IMAP server. See below for more on that.
See also below for more about configuring IMAP and Gmail in general.
A second method involves XOAUTH2 authentication mechanism.
I will give you a basic outline, but you will likely need to read more about it
in this webpage for further details.
In order to force alpine to authenticate with Gmail using XOAUTH2, you will
need to tell alpine to use this method.
So whenever specifying an IMAP server and/or SMTP server, add the
auth=xoauth2 flag.
For example in alpine's setup, setting the inbox-path to
imap.gmail.com:993/notls/ssl/auth=xoauth2/user=jdoe@gmail.com
forces alpine to authenticate user jdoe using XOAUTH2.
Note, if you omit this flag, alpine will only attempt to authenticate in this
manner as a last resort.
Once you specify this flag in the inbox-path, when trying to open your inbox
for the first time, alpine will give you instructions on how to proceed in
configuring this method of authentication.
You will basically need to open a project on Google https://console.developers.google.com.
You will need to obtain credentials for this project.
These consist of a client ID and client secret.
These must then be entered into alpine.
To do so, enter alpine setup (S), and select xoaUth2 (U), where you will
enter the client ID and client secret.
Next time you start up alpine using this authentication method, Google will
recognize alpine as being authorized to access email services.
---------------
| Documentation |
---------------
* Alpine comes with a lot of on-line documentation and a context help feature.
* For more about alpine and its command line options see alpine man page.
--------
| Hebrew |
--------
With modern unicode terminals, Hebrew should be displayed correctly.
Pine:
To display Hebrew in Pine you need to add a display filter that will
display Hebrew text in reverse. See config: display-filters
Gmail
********************************************************************************
* - Gmail -
********************************************************************************
-----------------
| IMAP with Gmail |
-----------------
If you want to use Gmail with your favorite email client (that is not Gmail's
web interface or App), then you will need to configure Gmail to allow the use
of IMAP.
Enabling IMAP involves two steps:
* Enabling IMAP
* Setting "Access for less secure apps" to ON
(double click on button - moving it doesn't work)
Viewing status of "Access for less secure apps" can be done through
settings accounts.
To change the setting browse Gmail's security settings.
Note, "Access for less secure apps" is no longer supported by Gmail!
See "App passwords" in the following subsection for a method that allows
you to use an unsupported third-party app.
-----------------------
| Two step verification |
-----------------------
If you have two step verification enabled, your email client will likely not be
equipped to deal with it. It will prompt for a password, but Gmail will not
allow you in because the second step in the verification process cannot be
actuated by your email client. In such a case, there is a way to bypass the
second verification process for specific apps.
First login to your Google account here.
Open above link in a private or incognito window if the link opens to an
account other than the account you wish to modify.
Select the security category in the left pane.
Scroll down to "How you sign into to Google" category.
Select 2-Step verficiation.
Scroll down to bottom and click "App passwords".
Follow instructions to add a password for the given email client app.
When using your email client use the given password, rather than your regular
password.
For more, check out this link.
Itunes
********************************************************************************
* - Itunes -
********************************************************************************
------------------------------
| Installing Itunes on Windows |
------------------------------
-------------------------------
| To transfer pdf files to IPAD |
-------------------------------
1. Open Itunes. No need to plug in IPAD yet.
2. Select "Books"
3. Open a folder where pdf files to be transferred are stored....
4. Drag files into Itunes window
Note, if the files don't drag, check if permissions are correct.
(If your Windows is a VM on a Linux machine and you are transferring files
from a shared folder, check in Linux for rw-rw-rw permissions, or at least
r--r--r--.)
5. Plug in IPAD and select it.
6. Select "Books"
7. In "Sync books" area, check "sync all books".
8. At bottom press "sync" and wait till it's done.
9. Press "done", followed by pressing the "eject" symbol.
Games
********************************************************************************
* - Games -
********************************************************************************
A small list of games in Linux:
Soccer:
rcsoccersim
bolzplatz2006
Risk:
ksirk
Minecraft
********************************************************************************
* - Minecraft -
********************************************************************************
-------------------
| Minecraft profile |
-------------------
Open up profile editor.
Various profile options can be set there.
* Java Settings (Advanced)
- "Executable" specifies java (can use to control which version of Java to
use)
e.g. /usr/java/jdk-9.0.4/bin/java
- "JVM Arguments" specifies options to send to java
e.g.
-Xmx1G -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:-UseAdaptiveSizePolicy -Xmn128M
-------------------------------
| Setting up a minecraft server |
-------------------------------
1. Download minecraft server from Minecraft website
2. Place minecraft_server.1.12.2.jar in .minecraft/mods directory
3. Run it
$ java -Xmx1024M -Xms1024M -jar minecraft_server.1.12.2.jar
edit eula.txt - change "false" to "true" (this accepts the license agreement)
4. Run again
$ java -Xmx1024M -Xms1024M -jar minecraft_server.1.12.2.jar
To run without gui
$ java -Xmx1024M -Xms1024M -jar minecraft_server.1.12.2.jar nogui
To quit from GUI, close window
To quit from command line, press CTRL-C
----------
| Flam mod |
----------
* Installation (must have forge 1.12 or later)
1. Download mod as zip file
2. Unzip and place flam*.jar in ~/.minecraft/mods
3. Place flam directory in ~/.minecraft/mods
4. Run minecraft and select flam mod.
5. An empty directory ~/.minecraft/flam will have been created.
Copy contents of ~/.minecraft/mods/flam into it.
$ cp ~/.minecraft/mods/flam/* ~/.minecraft/flam
Bitscope
********************************************************************************
* - Bitscope -
********************************************************************************
----------
| In Linux |
----------
Bitscope application suite:
* bitscope-dso
* bitscope-console
* bitscope-chart
* bitscope-meter
* bitscope-logic
In setup -- under connection select USB; under PORT etc.. select /dev/ttyUSB0
Make sure /dev/ttyUSB0 has read write permissions for other (chmod a+rw)
Press "wave" button (on right of screen) to send a sin wave to "Gen"
port in Bitscope.
If bitscope not working try:
$ modprobe ftdi_sio
(on Fedora 20 only works for a minute, and then I have to power off and
power back on bitscope-dso to get capture to work again.
Works fully on MS-Windows).
Installing bitscope on Linux.
Can be installed using deb package manager or rpm package manager.
To install on an unsupported Linux system simply copy binary applications
from a system where bitscope is already installed to the target system
in /usr/bin. Binaries for 64-bit system will not work on 32-bit system,
and vice-versa.
Additionally, copy /etc/udev/rules.d/77-bitscope.rules
and /etc/bitscope/bitscope.prb.
All other files are documentation and icon files, and can be ignored.
---------------
| In MS-Windows |
---------------
In setup - under connection select USB; under PORT etc.. select COM1/2/3/4
(check in device manager which COM port is attached to serial USB - FTDI)
Hebcal
********************************************************************************
* - Hebcal -
********************************************************************************
Hebcal is a command line Hebrew calendar program written by
Danny Sadinoff. Source code is available from Github.
Usage:
hebcal [-acdDehHiorsStTwxy]
[-I input_file]
[-Y yahrtzeit_file]
[-C city]
[-L longitude -l latitude]
[-m havdalah_minutes]
[-z timezone]
[-Z daylight_savings_scheme]
[[month [day]] year]
hebcal help
hebcal info
hebcal DST
hebcal cities
hebcal warranty
hebcal copying
Available options and their explanation:
-a = Use Ashkenazi pronunciation
-c = Include candle lighting and havdalah times
-d = Display hebrew date for every day
-D = Display Hebrew date (only for days that would otherwise be displayed e.g. holidays)
-e = Print gregorian date in European format (day.month.year)
-h = ?
-H = Print calendar data for current Hebrew calendar year
-i = Exclude second day Yom Tov of Galuyot
-o = Include Omer count
-r = Formatting feature
-s = Sedra of week
-S = Sedra of week for each day
-t = Print just today's date (greg and Heb)
-T = Print just today's Hebrew date
-w = Include day of week
-x = Exclude Rosh Chodesh
-y = In gregorian date display only last two digits of year
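For example (the city and year are arbitrary), to print the calendar for 2025
with candle lighting times, the weekly sedra and the day of week:
$ hebcal -c -C Jerusalem -s -w 2025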
JAVA
********************************************************************************
* - JAVA -
********************************************************************************
The Java Virtual Machine (JVM) allows one to run applications written
for Java.
Eclipse is a software package that offers a programming development
environment for Java and should be installed if you are planning on programming
in Java. (Note, technically a text editor would be sufficient to program in
Java, but using eclipse facilitates the task tremendously).
There are multiple versions of Java and some options passed onto the
Java Virtual Machine (JVM) may not be supported by a given Java version,
so make sure your system launches the correct Java platform.
In Archlinux use the following command to help with java functionality
$ archlinux-java COMMAND
where COMMAND is one of the following:
* status - List installed Java environments and enabled one
* get - Return the short name of the Java environment set as default
* set - Force as default
* unset - Unset current default Java environment
* fix - Fix an invalid/broken default Java environment configuration
-----------------
| Troubleshooting |
-----------------
* When opening eclipse getting error:
"Unrecognized VM option 'UseStringDeduplication'"
If you get this error you probably need to set the default java version to 8.
In arch linux use the following to get a list of installed java environments:
$ archlinux-java status
java-10-openjdk
java-7-openjdk/jre (default)
java-8-openjdk
This shows that java-8 is installed, but it is not the default.
To set it to the default:
$ archlinux-java set java-8-openjdk
This should do the trick.
If not installed then install it (see ArchWiki on Java).
GCC
********************************************************************************
* - GCC -
********************************************************************************
--------------
| Introduction |
--------------
gcc (GNU Compiler Collection) is a collection of programs, including a
compiler and linker, that handle turning source code into an executable.
gcc contains C and C++ compilers, and may have front ends that have been
written to handle other languages.
A few steps are involved in going from a high level language description of
a program to the actual executable machine code:
* Compiling into assembly
* Assembly to object (machine) code
* Linking (whether multiple modules to each other or to libraries)
A C file is suffixed with a ".c".
A C++ file is suffixed with a ".C".
A header file is suffixed with a ".h".
An object code is suffixed with an ".o".
The default executable name is a.out, although an alternative executable
name can be specified with the "-o" switch.
-------------
| Basic usage |
-------------
The most straightforward usage of gcc is in compiling a single file
$ gcc mycode.c
which produces the executable a.out
To produce an executable with the name myprog
$ gcc mycode.c -o myprog
For more complicated programs a programmer(s) will usually split the program
into multiple files or modules, in which case, all files must be specified
on the command line. For example
$ gcc subcode1.c subcode2.c
Almost invariably header files will be part of this modular design.
They are normally pulled in via #include directives rather than listed on the
command line; use -I to add directories to the header search path. For example
$ gcc -I./include subcode1.c subcode2.c
gcc can also accept files in c and c++ in the same compile session.
For example
$ gcc subprog1.c subprog2.C
In case where more than one module makes up the program, and only a subset of
the modules have been modified, it is not necessary to recompile those modules
that have already been compiled. Rather, have gcc generate object files for
those files, and in subsequent compilations include as arguments to gcc the
source ("c") files that have been modified, and object ("o") files that have
been generated in a previous compilation for those modules which have not been
modified. For example
$ gcc header1.h subcode1.c libcode.o
To generate separate object files for library type modules use
$ gcc -O -c libcode.c
This invocation will produce libcode.o (the -c switch suppresses linking so
libcode.c does not require a main() section).
Subsequently the libcode part of the program need only be linked (and not
compiled).
$ gcc mainprog.c libcode.o
will compile only mainprog.c, but link libcode.o in the final executable.
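To make this concrete, here is a minimal sketch (the file and function names
are made up for illustration):
/* libcode.h */
void greet(void);
/* libcode.c */
#include <stdio.h>
#include "libcode.h"
void greet(void) { printf("hello\n"); }
/* mainprog.c */
#include "libcode.h"
int main(void) { greet(); return 0; }
Compile the library module once, then link it into the program:
$ gcc -c libcode.c
$ gcc mainprog.c libcode.o -o myprog
$ ./myprog
hello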
-----------
| Libraries |
-----------
There are two kinds of libraries:
(1) Static libraries
In this case the library executable code becomes part of the program
executable. Linking library object code to the main program is done as
described in the previous subsection.
(2) Dynamically linked libraries (DLL)
In this case the library executable code is not part of the program executable.
When a call is made to a library function it executes code that resides
elsewhere in the file system. This is the more common way of structuring
software. The advantage of writing programs to use DLLs is that the program
occupies less space on disk. The disadvantage is the program is not self
contained. If it relies on a DLL that is not installed, the program will fail.
In fact, the installer for the program will likely refuse to install the
program until its library dependencies have been satisfied.
Repositories solve this problem by maintaining a database of library
dependencies for all software in that repository. When the installer
sets out to install a package, it checks for dependencies of that package,
as well as for dependencies of dependencies, and will install all the necessary
software that the package relies on.
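The following sketch shows how each kind of library is typically built and
linked against (the names libcode.c and mylib are just examples):
* Static library
$ gcc -c libcode.c
$ ar rcs libmylib.a libcode.o
$ gcc mainprog.c -L. -lmylib -o myprog
* Shared library
$ gcc -fPIC -c libcode.c
$ gcc -shared -o libmylib.so libcode.o
$ gcc mainprog.c -L. -lmylib -o myprog
$ LD_LIBRARY_PATH=. ./myprog
The LD_LIBRARY_PATH is only needed here because the shared library is not in
a standard location; normally it would be installed under /usr/lib (or similar)
and registered with ldconfig.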
C, C++, Linking, Shared Libraries
********************************************************************************
* - C, C++, Linking, Shared Libraries -
********************************************************************************
(This section is work in progress)
------------------
| Shared Libraries |
------------------
ldd - Prints the shared libraries required by each program or shared
library specified on the command line.
ldconfig - See man page, or this webpage.
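For example, to see which shared libraries an executable needs (myprog is an
arbitrary program name):
$ ldd ./myprog
Each output line names a required library and the path it resolves to, or
"not found" if the dependency cannot be satisfied.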
Virtualization with Qemu
********************************************************************************
* - Virtualization with Qemu -
********************************************************************************
qemu is a processor emulation program.
It works by creating a virtual machine (a machine within a machine).
The virtual machine could then boot an operating system from a disk image
and execute the commands while displaying all output on a window associated
with that virtual machine.
qemu works together with the kvm hypervisor (note the kvm hypervisor cannot
run in accelerated mode concurrently with other hypervisors running in
accelerated mode such as Virtualbox and VMWare).
----------
| Examples |
----------
To boot /dev/sr0 (cdrom drive) on a virtual machine
$ qemu /dev/sr0
To create a virtual disk image of size 3G
$ qemu-img create -f qcow2 c.img 3G
(the qcow2 is the most versatile qemu virtual disk format,
allows saving screenshots and more)
Boot the guest operating system Fedora-8-Live from the image
Fedora-8-Live-KDE-i686.iso
$ qemu -cdrom Fedora-8-Live-KDE-i686.iso -hda c.img -m 256 -boot d
To boot directly from the cdrom you need to place the cd/dvd in the slot,
mount it (mount -t iso9660 /dev/cdrom /mnt), and issue the command
$ qemu -cdrom /dev/cdrom -hda c.img -m 256 -boot d
To put into effect a "seamless mouse" (one that doesn't get locked into
the window) add two options to your qemu command
$ qemu -cdrom Fedora-8-Live-KDE-i686.iso -hda c.img -m 256 -boot d
-usb -usbdevice tablet
If a seamless mouse is not being used, then the mouse will get locked into
the qemu window when running X-Windows or Windows. In that case, CTRL-ALT
returns the mouse to the host operating system.
You can launch an installed system directly:
$ qemu -hda c.img -m 256
You can also run qemu with sound and correct local time
$ qemu -hda c.img -m 256 -soundhw sb16 -localtime
You can use the qemu monitor to issue commands to an active qemu session.
Example of booting from a floppy and specifying cdrom to be "/dev/cdrom" and
harddrive to be "c.img"
$ qemu -fda /dev/fd0 -cdrom /dev/cdrom -hda c.img -m 256 -boot a
Note: The a: drive is mapped to fda, the d: drive is mapped to cdrom,
and the c: drive is mapped to hda
Example of starting up a Windows 95 installation
$ qemu -cdrom win95.iso -hda c.img -m 256 -boot c&
In this example the cdrom drive will always contain the windows installation
disk image. To allow cdrom to be used with different cd's, as well as a
floppy disk:
$ qemu -fda /dev/fd0 -cdrom /dev/cdrom -hda c.img -m 256 -boot c&
Example of launching a virtual machine using kvm framework
$ qemu-kvm -cdrom archlinux-2018.06.01-x86_64.iso -hda archdisk.img -m 2024
Example of starting an archlinux iso image to correct problems with a
troubled installation.
$ qemu-kvm -cdrom isodir/archlinux-2018.06.01-x86_64.iso -hda archlinuximg.img -hdb homeimg.img -m 2024 -net nic -net user -vga std -boot d
-----------------------
| commonly used options |
-----------------------
* To provide networking capability to your VM add the options
-net nic -net user
* To specify a standard VGA emulated graphics card
-vga std
* To specify RAM size
-m 2048
* To launch a virtual machine using a real harddrive:
$ qemu -net nic -net user -m 256 /dev/sdc
/dev/sdc refers to the harddrive. Substitute accordingly.
It can also be a true disk image saved as a file (e.g. mydiskimage.img)
To grab a USB device:
$ qemu -usb -usbdevice host:04e8:3228 -m 1024 /dev/sdb
In this example qemu will launch the bootloader located on the MBR of /dev/sdb.
Furthermore, qemu enables a USB hub, as well as grabs USB device
with vendor_id 04e8 and product_id 3228.
Use lsusb to determine this info.
Notice that the grabbed USB device will not be usable by the host OS.
The USB device specified is a Xerox Phaser 3110, and I used this to print to
my printer from an old F15 installation.
In monitor mode issue the command "info usb" to verify that USB device is
connected.
------------------
| Virtual Consoles |
------------------
In Linux there are seven virtual consoles. You can switch between them
using the key combination Alt-Fn, where n is a number between 1 and 7. The
seventh console is usually reserved for X.
--------------
| Special keys |
--------------
During virtualization special keys may be invoked in the guest:
1. special keys during graphical emulation:
* Ctrl-Alt-f - Toggle full screen
* Ctrl-Alt-n - Switch to virtual console "n"
- n=1 Target system display
- n=2 Monitor
- n=3 Serial Port
* Ctrl-Alt - Toggle mouse and keyboard grab
2. Special keys during non graphical emulation:
* Ctrl-a h - Print this help
* Ctrl-a x - Exit emulator
* Ctrl-a s - Save disk data back to file (if -snapshot)
* Ctrl-a t - Toggle console timestamps
* Ctrl-a b - Send break (magic sysrq in Linux)
* Ctrl-a c - Switch between console and monitor
* Ctrl-a Ctrl-a - Send Ctrl-a
------------------------------------
| Mouse adjustments in virtual guest |
------------------------------------
When using a mouse in the virtual guest, the host mouse coordinates
may not register correctly in the guest machine's instance of X11.
This can be compensated for with "xinput", as follows:
No compensation is equivalent to the identity transformation:
* orig
$ xinput set-prop "QEMU Virtio Tablet" --type=float "Coordinate Transformation Matrix" 1 0 0 0 1 0 0 0 1
When adjusting display I use the following transformation:
* for 1920x1080 display
$ xinput set-prop "QEMU Virtio Tablet" --type=float "Coordinate Transformation Matrix" 1 0 0.001 0 1.00763 0.002 0 0 1
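To inspect the currently applied matrix (the device name may differ on your
system; "xinput list" shows the available devices):
$ xinput list-props "QEMU Virtio Tablet"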
For more info refer to this webpage.
-------------------------------
| Using virtual machine manager |
-------------------------------
Instead of running qemu directly from the command line, it is possible
to use a graphical interface known as "virtual machine manager".
It offers similar functionality to that which Virtualbox's graphical
interface offers.
Some of what it offers:
* creating virtual machines and storage devices
* tinkering with their settings (e.g. storage devices, memory, cpu, video)
In Linux the virtual machine manager that I use is virt-manager.
It is specifically designed to work together with qemu.
-------
| SPICE |
-------
Spice in the context of kvm is analogous to Virtual Box's guest utilities.
SPICE must be set up on the host machine.
In order to benefit from SPICE, spice-vdagent must be installed on the
guest machine (similar to installing guest additions on Virtual Box).
See the Spice user manual for more about it.
-----
| KVM |
-----
KVM is the hypervisor for qemu.
virt-manager is a GUI interface for creating and managing VMs.
virsh is a more extensive command line program to create manage and
monitor VMs.
To edit Virtual Machine's parameters:
$ virsh edit domain
where domain is the name of your guest.
You can use it to change any configurable setting in your VM.
virt-manager and virsh are based on a library called libvirt.
The libvirt daemon is administered as a systemd service. For example, to restart it
$ systemctl restart libvirtd.service
The man page provides a comprehensive reference on virsh
$ man virsh
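A few commonly used virsh commands (substitute your own domain name for
"domain"):
$ virsh list --all        # list all defined guests and their state
$ virsh start domain      # boot a guest
$ virsh shutdown domain   # request a clean shutdown of the guest's OS
$ virsh destroy domain    # forcibly power off a guest
$ virsh console domain    # attach to the guest's serial console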
------------------
| KVM and networks |
------------------
A subset of virsh commands relate to networking, a few of which are provided
here.
To list networks:
$ virsh net-list
 Name      State    Autostart   Persistent
----------------------------------------------
 default   active   yes         yes
 mynet     active   yes         yes
As with other facets of KVM, network configuration data is stored as XML files.
To display the contents of the xml file of mynet
$ virsh net-dumpxml mynet
<network>
  <uuid>3f2d43ba-2981-4333-c5e9-71ea3823d2a9</uuid>
  ...
To edit the above XML file
$ virsh net-edit mynet
To display the contents of mynet's XML file including any changes you have
made that have not yet been activated in the running instance of KVM:
$ virsh net-dumpxml --inactive mynet
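A few other useful network related commands (mynet is the example network
from above):
$ virsh net-start mynet       # start a defined network
$ virsh net-destroy mynet     # stop a running network
$ virsh net-autostart mynet   # start the network automatically at boot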
-----------------------
| Nested Virtualization |
-----------------------
* For nested virtualization
You will need to enable kvm to use nested virtualization.
To check if you have that capability at present:
$ cat /sys/module/kvm_intel/parameters/nested
If you get "Y", its already enabled.
If you get "N", you'll need to enable it.
Edit (or create the file /etc/modprobe.d/kvm-nested.conf)
Place the following lines in /etc/modprobe.d/kvm-nested.conf
options kvm-intel nested=1
options kvm-intel enable_shadow_vmcs=1
options kvm-intel enable_apicv=1
options kvm-intel ept=1
Now remove the kvm_intel module and load it again
$ modprobe -r kvm_intel
$ modprobe -a kvm_intel
To check if you have it:
$ cat /sys/module/kvm_intel/parameters/nested
Should get a "Y" this time.
Next, add virtualization capability to the guest VM
$ virsh edit VM
Look for the line starting with <cpu and modify it by adding/changing
mode='host-passthrough'
(Note: the other suggestion I've seen, mode='host-model', didn't work for me)
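For illustration, after the edit the cpu element in the guest's XML definition
would look something like this (other attributes and child elements may also
be present):
<cpu mode='host-passthrough'/>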
-----------
| UEFI boot |
-----------
To boot an OS that requires UEFI, you will need to create a virtual machine with
UEFI support.
In Fedora (if not installed already) install UEFI support
$ sudo dnf install edk2-ovmf
When creating the virtual machine using the virtual machine manager's
wizard, you will encounter a check box "customize installation". Click that box.
This brings up the window with all the hardware specifications for your VM.
In the "Overview section" you can change the firmware from BIOS to ovmf.
Select "OVMF_CODE.fd" to do so.
See here for more.
When booting a Windows guest (and perhaps other OSs) you may be locked into a
minimal resolution mode (i.e. 800x600), and the OS will refuse to give other
resolution options.
The solution is to go into the UEFI menu during boot and select a different
resolution. Press ESC repeatedly as the VM is first booted and that will bring
up OVMF's menu. Navigate to the resolution menu and select the desired
resolution setting. The change of settings only applies to the next boot.
Therefore, shutdown the VM and restart. At that point Windows should launch
straight into the resolution mode you selected.
See here for more.
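If you launch qemu directly rather than through virt-manager, the UEFI firmware
can be supplied via two pflash drives (a read-only CODE image and a writable
per-VM copy of the VARS image). The firmware paths below are where Fedora's
edk2-ovmf package puts them on my system; they may differ on yours:
$ cp /usr/share/edk2/ovmf/OVMF_VARS.fd myvm_VARS.fd
$ qemu-kvm -m 2048 -hda c.img \
    -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/ovmf/OVMF_CODE.fd \
    -drive if=pflash,format=raw,file=myvm_VARS.fd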
KVM Hypervisor
********************************************************************************
* - KVM Hypervisor -
********************************************************************************
Virtualization with VMWare
********************************************************************************
* - Virtualization with VMWare -
********************************************************************************
Running vmware
$ /usr/bin/vmplayer
Configuring vmware
$ /usr/bin/vmware-config.pl
Note: I omitted all interactive questions, and let the config file use its
defaults. The original configuration file is backed up as:
/usr/bin/vmware-config.pl.backup
Example: Launch a Windows XP virtual machine stored in /mnt/linux/winxp.
$ vmplayer /mnt/linux/winxp/winxp.vmx
VMWare converter is an application that converts various virtual machines,
or even existing physical machines, into VMware virtual machines.
$ vmware-converter-client
VMware Tools Components are the equivalent of VBox's guest additions.
Some of the components are:
vmtoolsd (vmtoolsd.exe on Windows guests) - synchronizes time.
vmxnet - Network handling
vmware-user (VMwareUser.exe) - Handles copying and pasting between host/guest
Virtualization with Virtualbox
********************************************************************************
* - Virtualization with Virtualbox -
********************************************************************************
Virtualbox is an open source virtualization engine and hypervisor from
Oracle Corporation.
The two important commands to know are:
virtualbox - Main application with graphical interface
vboxmanage - Command line application to manage virtualbox including
launching VM's and saving their state.
------------
| VBoxManage |
------------
You can use VBoxManage to do everything you can do with the graphical
interface, and more.
Examples:
* To register a VM (for example, there is an existing virtual machine on
some disk (e.g. a flash disk), and you wish to have it show up in virtualbox)
$ VBoxManage registervm filename
(e.g. VBoxManage registervm /home/sanhd/vbmachines/Fedora20san/Fedora20san.vbox)
* List virtual machines on host
$ VBoxManage list vms
* Disable long mode (64-bit guest support) for a VM (use the VM name as reported by "list vms")
$ VBoxManage modifyvm "vmname" --longmode off
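A few more VBoxManage examples (vmname is whatever "list vms" reports):
$ VBoxManage startvm "vmname" --type headless   # start a VM without a GUI window
$ VBoxManage controlvm "vmname" savestate       # save the VM's state and stop it
$ VBoxManage showvminfo "vmname"                # display detailed information on a VM
$ VBoxManage list runningvms                    # list currently running VMs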
----------------------------
| Guests and Guest Additions |
----------------------------
Can view guest properties from host with
$ VBoxManage guestproperty enumerate "Name of VM"
Can view (and set) guest properties from within guest with
$ VBoxControl guestproperty enumerate
See chapter 4 of VBox manual.
VBox guest additions provide various functionality to enhance the
capabilities of the guest machine.
The various functionality is implemented by various drivers.
On a systemd based system they are initiated and managed by systemd services.
On a SysV based system rc.d setup scripts are used to initiate these services.
For systemd these are presently the services offered:
vboxadd.service -
vboxadd-service.service - sets up time synchronization within the guest
vboxadd-x11.service - handles X11 and OpenGL part of the guest additions
(Note that X itself must use the "vboxvideo" driver when running on
a VBox virtual machine for proper functioning. Modern X-installations
will automatically load it if they detect the machine is VBox).
----------------
| Shared Folders |
----------------
Use Virtualbox GUI to specify a shared folder.
For instance specify that the folder /home/jdoe/financial on the host machine
be shared with a VM, and let the shared folder be called "fin".
Within the guest, you'll need to mount the folder (will need to have
guest additions installed).
Mounting a shared folder within a VM instance:
$ mount -t vboxsf -o uid=username sharedfoldername mountpoint
For the specific example:
$ mount -t vboxsf -o uid=jdoe fin /home/virtjdoe/financial
where virtjdoe is the user name in the VM.
Note, permissions on the host have to be set appropriately. That is user "jdoe"
has to have permissions to access financial and its contents. This is normally
not an issue if the directory is in your own home directory.
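To have the share mounted automatically at boot, an entry along these lines in
the guest's /etc/fstab should work (a sketch; adjust the share name, mount
point and user to your own setup; the vboxsf module must be available at that
point, so adding the nofail option can be prudent):
fin   /home/virtjdoe/financial   vboxsf   defaults,uid=jdoe,nofail   0  0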
-------------
| USB Devices |
-------------
To access USB devices in a VM (and gain other privileges) add your user account to
vbox groups:
$ sudo usermod -aG vboxsf,vboxusers username
Note, you must install virtualbox extension pack to access USB devices.
-------------------------
| VirtualBox installation |
-------------------------
Best to use the Virtualbox repository to install Virtualbox.
For Fedora add the virtualbox repository file:
/etc/yum.repos.d/virtualbox.repo
Download its contents from:
https://download.virtualbox.org/virtualbox/rpm/fedora/virtualbox.repo
* Installing on Archlinux
Note: You can't install the VBox guest additions into ArchLinux using
VirtualBox's guest additions ISO. Instead, install them using pacman.
$ pacman -S virtualbox-guest-utils virtualbox-guest-modules
--------------------
| Virtualbox Modules |
--------------------
* Sometimes the appropriate modules are not loaded automatically.
Manual loading of the modules can be achieved with "modprobe" or "insmod".
Drivers to look out for in the guest machine:
vboxvideo, vboxsf (shared folders), vboxguest
In host look for:
vboxdrv
Automatic loading of these modules is usually done with rc.d script files in
SysV installations, and with systemd services in systemd based installations
(see below), so you usually don't have to load them manually.
Manually loading these drivers should only be necessary when something went
wrong and you need to troubleshoot.
To recompile and reload vbox drivers use:
$ vboxreload
(note: to recompile the modules you need to have kernel headers installed.
Make sure the header version matches the running kernel;
with dnf you can specify the version number explicitly.)
To get status, stop or start vboxdrv
$ /usr/lib/virtualbox/vboxdrv.sh status|stop|start|restart
* Another description of manually loading the guest modules
$ modprobe -a vboxguest vboxsf vboxvideo
For automatic loading at boot time create a virtualbox.conf file in
/etc/modules-load.d/
(See this webpage)
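The file simply lists the module names, one per line. For example,
/etc/modules-load.d/virtualbox.conf might contain:
vboxguest
vboxsf
vboxvideo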
Alternatively, enable the vboxservice service.
After installation and loading of modules, enable the guest features with
$ VBoxClient --clipboard --draganddrop --seamless --display --checkhostversion
note:
VBoxClient manages the following features:
* shared clipboard and drag and drop between the host and the guest
* seamless window mode
* automatic resizing of the guest display according to the size of the guest window
* checking the VirtualBox host version
Each of these features can be enabled independently with its dedicated flag
(as in the command above).
A bash script that calls VBoxClient as above is /usr/bin/VBoxClient-all
To run it automatically when X starts up, put that line in your .xinitrc
-----------
| Log files |
-----------
Log files:
* Virtual box installation's log
~/.VirtualBox/VBoxSVC.log
This is a general log for the virtualbox application
* A virtual machine's specific log
~/vbox-directory/machinename/Logs/VBox.log
* Installation related logs
/var/log/vbox-install.log
/var/log/vboxadd-install.log
/var/log/vboxadd-install-x11.log
/var/log/VBoxGuestAdditions.log
-----------------
| Troubleshooting |
-----------------
(1) Sharing virtual machine between users
If sharing a virtual machine you will inevitably run into a problem
whereby Virtualbox writes its files using umask 022, owned by the
login user and that user's group.
It does not respect the user's or the system's umask setting.
A potential solution is to masquerade as the owner of the virtual machine.
This can be accomplished using samba (and perhaps nfs).
See netman for more info.
Another option is to create an alias and corresponding sudo entry to
change permission for the virtual machine folder and its contents.
Also, create an alias to change permissions back to owner.
The first alias will be invoked prior to starting the VM,
and the second after stopping it.
The aliases will obviously have to invoke sudo, since one user can't change
the permissions to those of another user.
(2) USB on Virtualbox
* None of the USB devices connected show up on VBox's menu
Try installing USB extension pack.
Go to the VBox site www.virtualbox.org, select downloads, and select the
extension pack whose version number is closest to the version of your VBox.
In the dialog box select "install" (you will be prompted for the
administrator password).
Also add user to "vboxusers" group.
Reboot computer, and your USB devices should appear in the menu.
* USB controller on Win7
If Windows fails to install the Universal Serial Bus (USB) controller driver
in the virtual machine, check which kind of controller is selected
in the virtual machine's settings. If it's USB3, change it to USB2.
That should do the trick, and Windows will then install the USB controller.
* USB3 device plugged into USB3 port gives error when virtual machine
attempts to capture device (VERR_PDM_NO_USB_PORTS).
Solution: if virtualbox extension pack is not installed, install it.
Then go to machine settings (while machine is powered off) and set
radio button "USB 3.0 (xHCI controller)". This should do the job.
* iPod/iPad doesn't connect to Win7 VM
Make sure USB controller is installed properly (check device manager in
control panel).
If not follow steps above.
------------------
| Accelerated Mode |
------------------
Most mainstream processors today have a feature whereby a virtual machine
can execute code directly on the processor without requiring processor
emulation.
For Intel processors this feature is named VT-x.
If it weren't for this feature virtualization would not be a popular option,
as processor emulation would be very slow.
This feature has to be enabled in the BIOS.
It is often enabled by default, but not necessarily.
If not, you should enable it in the BIOS settings prior to using Virtualbox.
Virtualbox (as is the case with other hypervisors) uses this feature, and will
complain if it is not enabled.
If running on a 64-bit machine the following error may occur:
Can't start virtual machine: VT-x not enabled in BIOS
and under details:
"verr_vmx_msr_vmxon_disabled"
In Linux you can check for VT-x by
$ grep vmx /proc/cpuinfo
For AMD SVM
$ grep svm /proc/cpuinfo
Or if you don't know which processor you have:
$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo
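Another quick check is lscpu, whose output contains a "Virtualization:" line
(VT-x or AMD-V) when the feature is available:
$ lscpu | grep -i virtualization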
Virtualization of Android devices
********************************************************************************
* - Virtualization of Android devices -
********************************************************************************
There are a number of ways to emulate an Android device on your computer.
The two I have experimented with are Genymotion and Google's Android emulator.
The latter is described in its own section.
------------
| Genymotion |
------------
Genymotion is a commercial Android emulation software.
It has a free license available for non business purposes.
Genymotion requires an installation of Virtualbox in order to run, so before
installing Genymotion, install Virtualbox.
Note, if you use a different hypervisor for your VMs (e.g. kvm, or VMWare),
you will not be able to run Genymotion Android devices concurrently with the
other hypervisor's VMs.
To install Genymotion, open this webpage.
Sign in or create a free account and download Genymotion.
Make genymotion-*.bin executable
$ chmod u+x genymotion-*.bin
Run it
$ ./genymotion-*.bin
Start Genymotion (assuming installation path is /opt):
/opt/Genymobile/genymotion
Press +Add machine. You will be presented with a list of android machines to
install. Select one (e.g. Galaxy 8), and install.
Now you can launch the emulated Android machine.
--------------------------------------
| ADB - Interfacing computer to device |
--------------------------------------
To interface between the host computer and the emulated Android device, you
will need a utility called ADB (Android Debug Bridge).
Note that adb can also interface to a physical Android device.
For more on that see this Howtogeek webpage.
Genymotion offers its own adb, but you will want to use an alternative adb
program offered by Android SDK, which will be discussed in its own section.
Install Android SDK as described in section Android SDK.
The SDK's adb app will not allow you to connect to server because Genymotion
will have captured that socket, so first you must set the SDK's adb as
Genymotion's default adb.
To accomplish this go to Genymotion "settings" for the given emulated Android
device.
In settings select ADB.
Change to custom adb (instead of Genymotion's adb).
Select the SDK directory that was installed (e.g. ~/Android/Sdk)
Note, the adb tool is located in Sdk directory under "platform-tools/adb"
To install an app on one of Genymotion's emulated Android devices, first download
the app (say ~/Downloads/myapp.apk) and issue the adb command as follows:
$ ~/Android/Sdk/platform-tools/adb install ~/Downloads/myapp.apk
You might have to root adb first
$ ~/Android/Sdk/platform-tools/adb root
You can then subsequently use this adb to perform various actions
such as running a shell inside the virtual machine.
$ ~/Android/Sdk/platform-tools/adb shell
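A few other frequently used adb subcommands (the paths and file names here are
just placeholders):
$ ~/Android/Sdk/platform-tools/adb devices                           # list connected devices/emulators
$ ~/Android/Sdk/platform-tools/adb push ~/Downloads/somefile /sdcard/Download/
$ ~/Android/Sdk/platform-tools/adb pull /sdcard/Download/somefile .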
If you are low on memory see section "Android SDK".
Note, Fedora has a package containing android tools, amongst which is an adb.
To install in Fedora:
$ dnf install android-tools
Although, here too you will want to use Android SDK's adb instead.
----------------------------------------
| Installing Whatsapp in emulated device |
----------------------------------------
Two ways of installing Whatsapp in your emulated android phone are described:
* Download Whatsapp for linux from Whatsapp Linux.
To install follow the general adb install procedure given above for
the Whatsapp.apk file you downloaded.
To backup Whatsapp chat history:
Open whatsapp and select
Settings : Backup : Chat backup
Before proceeding to press the backup button, select "Google Drive Settings".
Press "Backup up to Google Drive" and in the dialog box select "never".
Now press "Backup Button", and the chat history will be backed up locally
in /sdcard/WhatsApp/Databases.
A few backups may be present. The one with the current date on the
phone is what you just backed up.
You can transfer this file to your computer using the adb utility:
$ ~/Android/Sdk/platform-tools/adb pull /sdcard/WhatsApp/Databases/filename.db
(Note, you can use adb shell to browse the Whatsapp directory to see
what's there.)
* Another way to install Whatsapp is to download the Whatsapp for Linux APK file
as described above. Then transfer the downloaded file to the Download folder
on the phone's sdcard.
$ adb push ~/Downloads/WhatsApp.apk /sdcard/Download/
Note, downloading the file by way of Whatsapp hasn't worked for me.
It could be because the app being downloaded from the Linux instruction page is
for x86 architecture, whereas Whatsapp downloads the arm version
(but I don't really know).
Open the "files" app in the phone (which should place you in the Download
folder), and click on Whatsapp.apk.
It will ask if you want to install. Answer yes.
This method is particularly important when performing an upgrade of Whatsapp.
When upgrading, do not erase the old Whatsapp, as all your chat histories will
be erased as well.
-------------
| Google play |
-------------
Most often you will not be able to download a *.apk file to install directly
as above.
To install apps in a generic manner on your emulated android device you will
need Google Play.
Google play, however, is not available for most emulated devices.
A package of Google apps (including the Play Store) called Open GApps is
available from opengapps.org.
If using Genymotion refer to the Genymotion manual for three methods by which
to install opengapps on your emulated android device.
According to the manual you should select opengapps with the following
specifications:
* Platform = x86
* Android device version should match your current android installation
(see the Genymotion start widget).
* Variant = nano (this means opengapps with minimal functionality)
Once downloaded you can drag and drop into the emulated device window.
Genymotion will prompt you to begin the transfer, and then to flash the
transferred zip file.
According to some web sites you also need to install an arm translation
package using an "arm translation installer".
See:
* Stackoverflow
* Github
Android SDK
********************************************************************************
* - Android SDK -
********************************************************************************
Android SDK is a development and testing environment for Android applications.
It also comes with an adb, as well as an Android emulator.
(Note, Android SDK was mentioned with regard to adb in the previous section.)
------------
| Installing |
------------
Download Android's SDK from here.
Unpack it in your $HOME directory.
In a terminal issue the command
$ sudo android-studio/bin/studio.sh
The first time it is launched studio.sh will bring up an installation wizard.
On subsequent launches, it will launch the SDK.
For facilitating launching, you can create an alias, such as
$ alias studio='$HOME/android-studio/bin/studio.sh'
When first installed, running studio.sh brings up a window providing options to
"start new project" etc. This is meant for developing an Android app.
At the bottom is a settings option allowing you to create a shortcut
for launching the SDK.
Otherwise you can launch from terminal or define an alias as mentioned above.
--------------------------
| Creating Virtual Devices |
--------------------------
Use Avd Manager to create virtual android devices with installed Android OS.
From command line, issue
$ ~/Android/Sdk/tools/bin/avdmanager ...
You can also launch this from the SDK's GUI interface. Look for a button on the
tool bar (on top) that shows a balloon saying "AVD Manager".
-------------------------
| Running Virtual Devices |
-------------------------
To list existing devices (virtual Android phones) from the command line
$ $HOME/Android/Sdk/emulator/emulator -list-avds
To launch a device from the command line
$ $HOME/Android/Sdk/emulator/emulator -avd devicename
Here too you can create aliases to facilitate launching of devices.
To attach a camera to a virtual phone
$ $HOME/Android/Sdk/emulator/emulator -avd MyAVD -camera-back webcam0 -no-snapshot-load -no-snapshot-save
--------------------
| Memory and storage |
--------------------
To increase internal memory use the SDK avdmanager GUI and edit the device
settings. Select
Advanced Settings : Internal storage
To create an external sdcard:
$ $HOME/Android/Sdk/tools/mksdcard -l sdcard_label 4096M ~/.android/avd/myvirtphone.avd/mysdcard.img
The sdcard will be formatted as FAT.
To identify added sdcard in virtual phone, select
Settings : Storage
You can also use the adb shell, or the command df, to verify the presence
of the storage device.
Note, sdcard emulation must be enabled for your sdcard to show up in your phone.
If you get a message to the effect "hardware doesn't support sdcard emulation",
then you need to edit the file: ~/.android/avd/yourphone.avd/config.ini, and
set option: hw.sdCard=yes
Make sure to attach your sdcard using the command line option
-sdcard fullpathname.img
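For example, putting it all together (substitute your own AVD name and sdcard
image path):
$ $HOME/Android/Sdk/emulator/emulator -avd MyAVD -sdcard ~/.android/avd/myvirtphone.avd/mysdcard.img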
-----------------------
| Files and Directories |
-----------------------
Android system images are located in directory
$HOME/Android/Sdk/system-images/...
Android skins (graphical look of phone or tablet) are located in directory
$HOME/Android/Sdk/skins
The startup script is
$HOME/android-studio/bin/studio.sh
Termux
********************************************************************************
- Termux -
********************************************************************************
Termux is an Android app that provides Linux type functionality for Android
based devices. When opened up it presents the user with a terminal interface
running bash, much the same as a Linux console or a terminal.
Termux does not require a device to be rooted. It operates completely within
the confines of its sandboxed environment and has access to device hardware and
system directories only to the extent allowed by the Android OS. Termux files
sit in the directory /data/data/com.termux
User related files (i.e. $HOME) reside in
/data/data/com.termux/files/home
System and other non-user files and directories reside in
/data/data/com.termux/files/usr
The usr directory contains the directories:
bin, etc, include, lib, libexec, share, tmp, and var
Termux provides a single user environment. The user name may be something
like u0_a231 and cannot be modified. Additional users cannot be added, and
there is no /etc/passwd like file.
Termux is based on Debian; however, it has its own (and less comprehensive)
repositories from which packages are obtained. The command pkg is used to
manage package installation and the like, rather than Debian's apt/aptitude.
When first running Termux it is recommended to update the system by issuing
the command:
$ pkg upgrade
(At times, repository names may change and the pkg command may fail. See the
troubleshooting section below for what to do in such a case.
In any case, it is recommended to update the system frequently to avoid the
issue.)
Subsequently, packages can be installed via
$ pkg install package_name
Search for a package via
$ pkg search keyword
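A few other pkg subcommands that should be available (run pkg with no
arguments to see the full list):
$ pkg list-installed          # list installed packages
$ pkg show package_name       # show information about a package
$ pkg uninstall package_name  # remove a package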
----------------------------------
| Remote Access to and from Termux |
----------------------------------
One of the things Termux can be used for is to access remote accounts via
ssh. To install it, type
$ pkg install openssh
The same package also provides an ssh server daemon, which allows you to
access Termux from another device or computer on the network. To start the
server, launch sshd
$ sshd
You can also invoke the daemon from an initialization script such as
.bash_profile to have it launched automatically when opening Termux.
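For example, a line like the following in .bash_profile starts the daemon only
if it is not already running (this assumes the pgrep utility is installed,
e.g. via the procps package):
pgrep sshd > /dev/null || sshd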
You can then access Termux remotely by invoking ssh from the remote device.
For example
$ ssh -p 8022 u0_a231@10.0.1.50
* 8022 (8000 + 22) is the port through which the ssh daemon in Termux
communicates by default.
* u0_a231 is the user name given to you by Termux.
If you are not sure what the username is, issue the command
$ whoami
* 10.0.1.50 is the IP address of the device (substitute accordingly)
You can figure out your Android's IP address by invoking either of these
commands on Termux
$ ip addr
or
$ ifconfig
You will be prompted for a password. If you haven't set one yet, issue the
command passwd to do so.
$ passwd
You can also set up public key authentication to obviate the need for a
password. Go here for instructions on how to do it.
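For example, if your computer already has an ssh key pair, ssh-copy-id can
install the public key on the Termux side (substitute your own user name and
IP address):
$ ssh-copy-id -p 8022 u0_a231@10.0.1.50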
-----
| X11 |
-----
Besides the main Termux repository, additional repositories are available.
Of particular note is the X11 repository. Various window managers and GUI based
software are available in this repository. To add the repository, type
$ pkg install x11-repo
In order to take advantage of a graphical environment you'll need a VNC server
running within Termux. Install TigerVNC in Termux
$ pkg install tigervnc
Launching a Tiger VNC server is as simple as
$ vncserver :2
This will launch a VNC server on display :2
You can launch more than one VNC server, say one on your tablet and one on your
computer, and have access to GUI applications on Termux.
The first time you launch TigerVNC you will need to set up a password and
perhaps configure other things.
Follow this link for more detailed instructions.
When finished kill the vncserver
$ vncserver -kill :2
See here for more about VNC.
To view an X session you will also need a VNC viewer, such as Tiger VNC viewer.
You can either view your X session locally on your android device, or remotely
from another device or computer on the same network.
Once configured, you should be able to launch applications either from the
X session on your VNC viewer, or from Termux itself. In either case make
sure your DISPLAY variable is set correctly. For instance if your VNC server
was launched on display :2, then set the display variable accordingly:
$ export DISPLAY=:2
The most basic and probably least resource intensive window manager is twm.
To install it, type
$ pkg install twm
A more sophisticated and powerful (and yet light on resources) window manager is
FVWM. Install it with
$ pkg install fvwm
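The applications that come up in the X session are whatever ~/.vnc/xstartup
launches. A minimal xstartup for fvwm might look like this (make the file
executable; the shebang points at Termux's own sh):
#!/data/data/com.termux/files/usr/bin/sh
fvwm &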
If your VNC viewer is running on a phone device or tablet it may take a little
getting used to, as moving the cursor around and simulating various mouse
clicking actions using touch screen gestures takes practice.
Additionally, on a phone device, you will likely only be able to view a part of
the desktop at a given time since the screens for these devices tend to be
small. The VNC viewer accepts a dragging hand gesture to move around the
viewport.
-------------
| Pulse Audio |
-------------
It is possible to play audio with Termux. Termux uses pulseaudio to manage
sound (see this section for more about pulseaudio).
To test out sound in Termux install SoX
$ pkg install sox
Use the play command to play a sound file. Obtain any sound file,
say mysnd.ogg, and issue the command
$ play mysnd.ogg
SoX supports just about any sound format, and is itself a powerful sound
processing utility (see this section for more about
SoX).
Although theoretically you should be able to record with SoX using the command
rec (which comes as part of the SoX tool suite), my installation of Termux
doesn't give me access to the microphone to make use of it.
See here for more about playing sound from Termux.
----------
| Andronix |
----------
Termux includes proot, a user-space chroot-like tool that allows you to run another
distro within Termux (See here for more about proot).
Andronix is an app available for Android devices that facilitates installing
on Termux a number of popular Linux distributions such as Debian, Ubuntu,
Fedora, Archlinux and others. Basic Andronix will install an unmodified version
of these distros. A premium version of Andronix will provide you with access to
modified versions of these distros. The modified distros were tweaked to work
more smoothly in a Termux environment. For example, sound works out of the box
with the modified OSs.
The main function of proot is to translate the location of the root directory.
For example if using Andronix to install Debian on Termux, it will treat
Debian's root tree as debian-fs which will reside in Termux's home directory
(/data/data/com.termux/files/home).
Once in the proot environment, a process or program that wants to access a
file, say /etc/passwd, will be accessing (in a transparent manner) the file
/data/data/com.termux/files/home/debian-fs/etc/passwd
The other function of proot is to bind certain system directories, such as
/proc and /dev, to those of the device's Android OS. This is necessary, since
whatever Linux installation you installed via Andronix is not aware that it is
running on Termux (which doesn't have nor need these directories), and thus
expects these standard directories to be present.
It is important to note that the proot environment is not about creating a
virtual machine or container in which the installed distro is running.
In fact, if you launch an application from within the proot environment, and
you issue the ps command within Termux, the process(es) associated with
that application will be listed.
Once installed, you can launch the distro by invoking the small script provided
by Andronix and located in the home directory. For example, for a Debian
installation type
$ ./start-debian.sh
This small script basically invokes the proot command with the necessary
arguments. You can edit the script and uncomment certain lines to permit
access to your Termux home directory and/or sdcard.
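To get a rough idea of what such a script does, the core of it is a proot
invocation along these lines (this is only a sketch, not the actual Andronix
script; -r sets the new root directory, -b binds a host directory into it,
-0 makes you appear as root, and -w sets the initial working directory):
$ proot -0 -r $HOME/debian-fs -b /dev -b /proc -w /root /bin/bash --login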
-----------------
| Troubleshooting |
-----------------
I have come across a problem of upgrading the Termux environment via
"pkg upgrade" (or equivalently "pkg update") or installing a package when a
mirror changes name. In such a case read the error messages when upgrading,
and extract from the output which repositories are causing the problem (it may
be all of them).
To get an idea of what things should look like, change to the apt directory:
$ cd /data/data/com.termux/files/usr/etc/apt
The main repository is specified in the file sources.list, whose contents
should be something like:
deb https://packages.termux.org/apt/termux-main/ stable main
Supplementary repositories, such as X11, are specified in the directory
sources.list.d
I once ran into a problem where pkg upgrade wasn't updating because of a
failure to locate the specified repository mirror. One solution on the web
suggested manually changing the location of the main repository.
For example sources.list was
deb https://k51qzi5uqu5dg9vawh923wejqffxiu9bhqlze5f508msk0h7ylpac27fdgaskx.ipns.dweb.link/ stable main
I changed it to
deb https://packages.termux.org/apt/termux-main/ stable main
which fixed the problem.
In such a case it is also a good idea to remove the other repositories as they
have probably also changed, and reinstall them using
pkg install. For example
$ pkg install x11-repo
Xen
********************************************************************************
* - Xen -
********************************************************************************
The Xen Project includes a type-1 (bare-metal) hypervisor and accompanying software.
It is one of the few open source type-1 hypervisors, and is the basis for some
commercial hypervisors and cloud products.
Xen is an evolving project, and some of the following may be deprecated.
For more up to date info and configuration instructions refer to the
ArchWiki article on Xen.
Note, the Xen suite is located in /usr/sbin
xend - the Xen daemon - the following operations are most common
xend start - starts xen daemon
xend stop - stops xen daemon
xend restart - stops and starts again the xen daemon
xm is a command line utility to manage and manipulate domains.
Warning: The Red Hat Customer Portal advises against using xm to manage
the Xen hypervisor. It suggests using virsh or virt-manager instead.
Some common functionality:
$ xm help [--long]
Displays a short [or long] help screen
$ xm info
Displays various information on the computer system including total
memory and available memory.
$ xm list
Lists all domains and various information about them (like cpu time, etc)
$ xm create [-c] configfile
Creates and launches a domain with domain properties configured in
the file "configfile" which is to be found in /etc/xen folder.
Alternatively, a full file path may be specified.
$ xm create [-c] /dev/null [options]
Creates and launches a domain with domain properties specified by
command line options.
$ xm shutdown
Prompts the OS running on the virtual machine to shut down. If the OS is
hung (i.e. crashed), then you will need to use the next command.
$ xm destroy
Effectively "pulls the cord" out of the virtual machine.
Doesn't give it a chance to shut itself down.
$ xm pause domain-id
Pauses a domain so that it is not allocated cpu time, although it will
continue to receive its allocation of memory.
$ xm unpause domain-id
Unpause a domain.
$ xm save domain-id state-file
Saves a domain into a state file ("state-file"), for subsequent restoration.
$ xm restore state-file
Restores domain associated with "state-file".
Example of creating the domain "ramdisk":
$ xm create -c /dev/null ramdisk=/boot/initrd-2.6.23.1-42.fc8.img \
kernel=/boot/vmlinuz-2.6.21.7-3.fc8xen \
name=ramdisk vcpus=1 \
memory=64 root=/dev/ram0
In this example we used command-line configuration options:
* Create ramdisk image and load it with /boot/initrd-2.6.23.1-42.fc8.img
* Kernel is the xen kernel "/boot/vmlinuz-2.6.21.7-3.fc8xen"
* Domain name (name) is "ramdisk"
* Number of cpus allocated to the domain (vcpus) is 1
* Memory allocation (memory) is 64MB
* Root directory (/) is /dev/ram0 (meaning / sits in ram and not on disk)
Other virtual machine management tools:
* virt-clone
* virt-image
* virt-install
* virt-manager
* virt-viewer
Example of booting a live-disk:
$ virt-install --name demo --ram 256 --nodisk --vnc --cdrom /mnt/images/boot.iso
Wine
********************************************************************************
* - Wine -
********************************************************************************
Wine is a software bundle that enables Windows applications to run on
POSIX-compliant OSs such as Linux, macOS, and BSD.
The way it works is by translating Windows API calls to POSIX System calls as
they are issued.
Wine is a constantly evolving project and aims to keep up with developments
in Windows, but you should be aware that not all Windows applications will
launch or run correctly with Wine.
For a comprehensive ranking of Windows applications in terms of their
compatibility with Wine refer to App Database.
If you plan on using wine it is a good idea to read wine's FAQ.
Any Windows application you wish to use with Wine should be installed using
its installer, run from your Unix system using wine. For instance:
$ wine InstallerForAppSuchAndSuch.exe
Once installed, you can execute the app via the command line:
$ wine AppSuchAndSuch.exe
You can also launch the app using a graphical interface.
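It can be convenient to keep unrelated Windows applications in separate wine
directories (prefixes). The WINEPREFIX environment variable selects which
directory wine uses; the path below is just an example:
$ WINEPREFIX=$HOME/.wine-myapp wine InstallerForAppSuchAndSuch.exe
$ WINEPREFIX=$HOME/.wine-myapp wine AppSuchAndSuch.exe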
Warning: Do not attempt to point Wine at an existing Windows installation
(e.g. by using a real Windows partition as Wine's C: drive)! This is sure to
render your Windows installation unusable.
The wine directory, in which wine related configuration files are located,
as well as the virtual C: drive (drive_c), is: $HOME/.wine
Wine comes with a number of administration tools, the most important being:
* winecfg - A configuration application.
Run this program the first time after wine has been installed, and it will
create a ~/.wine directory and place all the necessary files in it.
It is also used for general configuration needs such as graphics handling,
library overrides, and so forth.
* regedit - a registry configuration application.
Use this to manually set and/or delete items in the registry.
The registry in wine is text based (unlike MS-Windows).
Built-in applications:
* cmd.exe
* control.exe
* ddhelp.exe
* dosx.exe
* msiexec.exe
* notepad.exe
* progman.exe
* regsvr32.exe
* rundll32.exe
* winebrowser.exe
* winhlp32.exe
* winver.exe
Directory where builtin applications are found:
$HOME/.wine/drive_c/windows/system32
Wine is a compatibility layer for Windows. It has its own set of DLLs
(Dynamic Link Library - Windows library functions) to match those found in
Windows installations.
Specialized DLLs may not be available with Wine. In such a case the native
Windows DLL will have to be installed in the appropriate place in drive_c.
Using such libraries, however, requires emulation, which comes at the cost of
performance. It is therefore always best to stick to wine DLLs whenever
possible.
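Whether wine uses its builtin DLL or a native Windows DLL placed in drive_c
can be controlled per library, either in winecfg's Libraries tab or with the
WINEDLLOVERRIDES environment variable. For example (somedll is a placeholder
for the DLL's name; "n,b" means try the native DLL first, then the builtin one):
$ WINEDLLOVERRIDES="somedll=n,b" wine AppSuchAndSuch.exe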
SNAPS
********************************************************************************
* - SNAPS -
********************************************************************************
Snap is an app store for Linux.
Snap attempts to provide Linux with what GooglePlay and Apple's App Store
provide to smartphones and tablets. It is most useful for installing apps that
are not supported by your Linux distribution's repositories.
Developers publish their app on snap, and the app is then self contained
(sandboxed), as it contains its own library dependencies.
The degree of sandboxing depends on which snap interfaces have been
incorporated into the app package (specified in snapcraft.yaml file).
For a list of shared resources (interfaces), and their default behavior,
see this webpage.
One important interface is that which allows access to files in one's home
directory. Classic snap supports this by default. Standard snap, however,
doesn't.
Example: installing pdftk on fedora (based on this webpage)
$ sudo dnf install snapd
$ sudo ln -s /var/lib/snapd/snap /snap # To enable classic snap support
$ sudo snap wait system seed.loaded # See this webpage
$ sudo snap install app # Substitute "app" with the app you wish to install
----------------------
| To manage interfaces |
----------------------
For a guide to snap permissions refer to this webpage:
To view permissions for an installed snap, there is a button in the "Software"
program.
To see all interfaces and which snap apps utilize them
$ snap interfaces
To do so for a specific snap
$ snap interfaces name_of_app
To allow the pdftk snap to access files in the home directory, connect its
home interface
$ snap connect pdftk:home
For newer versions of snap use "connections" instead of "interfaces"
$ snap connections name_of_app
See man pages for more
$ man snap
CPU
********************************************************************************
- CPU -
********************************************************************************
The CPU is the "brain" of the computer. It executes machine language
instructions sequentially as they are fetched from memory. The first
instructions that are processed (when the computer is booted) come from the
computer's BIOS. The BIOS chip contains a program that looks for a storage
device that contains a bootable operating system (OS) and hands control over to
the OS (in actuality a bootloader program is activated which loads up the OS
and launches it).
The BIOS also loads basic drivers to interact with the computer's peripherals,
such as keyboard, disk drives, etc.
In modern computers, the traditional BIOS has been replaced with the UEFI
system which is file system aware, and can load drivers from arbitrary
locations in the file system. This gives it far more flexibility than the
more rigid BIOS. Go here for more.
A CPU comes with a heat sink.
All desktops and most laptops come with a fan that sits on top of the
CPU's heat sink.
In desktop computers, one or more additional fans are usually situated inside
the chassis.
The two main manufacturers of CPU's based on Intel's x86 instruction set are
Intel and AMD.
Both have a long list of CPU's they have developed over the years.
Both essentially run the same instruction set, so the software that runs on
one is generally considered compatible with the other.
The current dominant Intel CPUs are those in the i series:
i3, i5, i7 and i9.
The higher the number, the more advanced is the processor.
However, the generation of the processor is also a factor.
The i3, i5 and i7 have been around since 2008, and successive releases are
referred to by their generation.
There are many features that are associated with a given processor, such as
number of cores, number of threads, virtualization technology, cache,
built in GPU, maximum clock speed and power consumption.
Other developments and improvements in technology that differentiate one
generation of processors with another might be less known or publicized,
but nonetheless have an impact on performance.
Usually, within a given generation the i3 would be the least performant in
the series, whereas the i9 would be the most.
For example a 14th generation i3 may have up to 10 cores, whereas an i9 may
have up to 24 cores.
It is not necessarily straightforward to compare processors of different
generations, as a 14th generation i3 processor will in general outperform
a 2nd generation i5.
Just to get an idea of the advancements made in this series over the years:
Around 2010 (2nd gen - Sandy Bridge) main features were:
Max clock 3GHz.
i3 - 2 cores, without virtualization technology
i5 - 4 cores, with virtualization technology
i7 - 8 cores, with virtualization technology
In 2024 (14th gen - Meteor Lake and Raptor Lake) all have virtualization technology:
Max clock around 5GHz
i3 - up to 10 cores
i5 - up to 14 cores
i7 - up to 20 cores
i9 - up to 24 cores
A good article on the evolution of these processors can be found here.
Motherboard
********************************************************************************
- Motherboard -
********************************************************************************
A CPU (unlike a microcontroller) cannot function as a standalone computer. It
must interface to external memory, BIOS and peripherals. Modern CPUs usually
require a cooling apparatus.
The motherboard is a printed circuit board (PCB) that houses the CPU and other
components that make up the core of the computer.
Some of the notable components on a motherboard are:
* A slot for the CPU
* Memory (RAM) slot(s)
* BIOS chip
* Core chipset (Northbridge and Southbridge)
The Northbridge (host bridge) interfaces between the CPU and memory as well
as PCI express slots.
The southbridge interfaces indirectly between CPU (via the northbridge) and
the peripherals (SATA, USB, etc).
In general, components that require high speed communication with the CPU
are handled by the northbridge, and those that do not are handled by the
southbridge. In most modern CPUs, functions that were once handled by the
northbridge are integrated into the CPU.
* Expansion slots
Most modern motherboards house PCIe (PCI Express) and M.2 expansion slots.
Older motherboards may contain PCI and AGP slots.
The PCI expansion slot standard is much slower than PCIe.
AGP (the precursor to PCIe) was introduced to accommodate high speed graphics
cards.
Today all high speed graphics cards plug into a PCIe slot.
Some more powerful graphics cards utilize two PCIe slots to increase
communications bandwidth with the CPU.
* Video (e.g. VGA, DVI, HDMI), USB, ethernet and audio sockets arranged so as to
be exposed on the chassis back side.
Gaming motherboards may have PS/2 keyboard and mouse sockets.
Older motherboards may have PS/2 keyboard and mouse sockets, an RS232 serial
port and a parallel port connector.
* Onboard connectors for SATA ports
* Additional USB port connectors
* CPU fan socket and one or more sockets for fans that reside in the chassis.
* Power supply socket
Motherboards come in many sizes and shapes.
A standard desktop chassis usually accommodates the ATX motherboard standard.
Smaller variants of this standard include (in decreasing size) Micro-ATX,
Mini-ITX, Nano-ITX and Pico-ITX.
A smaller chassis may accommodate one of the smaller variant form factors,
but not the standard ATX form factor. Therefore, when purchasing a chassis and
motherboard separately, verify that the form factors are compatible.
Some of the common motherboard form factors are listed below.
* Standard ATX
305 x 244mm (12 x 9.6 in)
305mm dimension is fixed, whereas the other dimension may differ.
Contains 4 RAM slots and up to 7 expansion slots.
Requires an ATX case.
Note, some motherboards are designed for cryptocurrency mining. Such
motherboards contain many expansion slots suitable for inserting multiple GPUs
(graphics processing units). These GPUs are used for performing the massively
parallel computations required.
* microATX
244 x 244 (9.6 x 9.6 in)
One dimension is fixed; the other may be smaller.
It is compatible with an ATX case.
Typically contains 2 RAM slots, although it may contain up to 4.
Contains up to 4 expansion slots.
* Mini ATX - A few form factors go by this name:
150 x 150mm (5.9 x 5.9in)
284 x 208mm (11.2 x 8.2in)
* Intel NUC
This is a miniature computer. The motherboard usually measures 4x4 in
(102 x 102 mm).
The chassis is commonly 4.5x4.5x1.5 in.
See this Wikipedia entry for a comprehensive list of form factors.
------------------
| The power supply |
------------------
Modern ATX boards contain a 24 pin power supply socket.
The ATX power supply comes with a 24 pin connector. The connector contains a
latch on one side so that it can only be plugged into the socket in the
intended manner.
The connector has a detachable four pin portion, making the power supply
backward compatible with older ATX boards that have 20 pin sockets.
A second smaller connector, commonly referred to as the P4 connector, plugs
into the motherboard.
The 24-pin connector pinout is:
          _____
  +3.3V | 1|13| +3.3 V
  +3.3V | 2|14| -12 V
    COM | 3|15| COM
    +5V | 4|16| PS_ON#
    COM | 5|17||COM     \ Protrusion to prevent
    +5V | 6|18||COM     / plugging in reverse
    COM | 7|19| COM
 PWR_OK | 8|20| NC
  +5VSB | 9|21| +5V
  +12V1 |10|22| +5V
  +12V1 |11|23| +5V     \ Detachable
  +3.3V |12|24| COM     / portion
          -----
The P4 connector
        _____
  COM | 1| 3||+12V2
  COM | 2| 4||+12V2
        -----
When detached from the motherboard, it is possible to power up the power supply
by shorting pin 16 with any of the COM pins (e.g. pin 15).
Expansion Slots
********************************************************************************
- Expansion Slots -
********************************************************************************
Revision History
********************************************************************************
- Revision History -
********************************************************************************
2020-06-17us: Comprehensive editing
2020-10-21us: Major revision
2021-01-04us: Added part IX - computer architecture, plus minor revisions
2021-01-18us: Minor changes to Image Manipulation and Conversion Utilities section.
Added description of efix utility in Mgetty-Sendfax subsection.
Split efax related material into two subsections: an efax section and an efax-gtk section.
2021-05-02us: Expanded description of efix utility in Mgetty-Sendfax subsection.
2021-05-03us: Added subsection Reading from standard input in Bash Tutorial.
2021-05-07us: Added more to sudo. See here.
2021-05-19us: Expanded Laptop section, and GNOME Desktop sections.
2021-06-06us: Expanded subsection GDM to include information about automatic
login and disabling Wayland.
2021-06-17us: Added section on Termux.
2021-07-13us: Added section on Non-Graphical Login.
Incorporated items from troubleshootman into corresponding sections.
2021-07-23us: Added section on Thumbnails with ImageMagick.
2021-11-11us: Added section on Alpine Passwords and Gmail two-step verification.
2022-02-20us: Added subsection on efibootmgr.
2022-03-07us: Added subsection on KVM UEFI boot.
2022-03-13us: Added subsection on the Bash select command for generating simple menus.
2022-03-14us: Added more to tmux in subsection Terminals.
2022-09-10us: Added subsection on automatic mounting of encrypted file system at boot.
2023-05-19us: Added subsection on folds in VIM.
2023-05-31us: Expanded on subsection Windows in section VIM Essentials.
2024-05-16us: Added subsection on how to configure alpine to work with Gmail's more strict authentication requirements.
Expanded on section CPU.
2024-05-28us: Tmux which was previously a subsection of Terminals, was moved to its own section
Tmux, and expanded upon.
2024-12-17us: Edited subsection on X11 fonts, and added subsection on Xft font system.